Item Recommendation from Implicit Feedback
- URL: http://arxiv.org/abs/2101.08769v1
- Date: Thu, 21 Jan 2021 18:50:21 GMT
- Title: Item Recommendation from Implicit Feedback
- Authors: Steffen Rendle
- Abstract summary: This article provides an overview of item recommendation, its unique characteristics and some common approaches.
The main body deals with learning algorithms and presents sampling based algorithms for general recommenders.
The application of item recommenders for retrieval tasks is discussed.
- Score: 8.877053863402484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of item recommendation is to select the best items for a user from a
large catalogue of items. Item recommenders are commonly trained from implicit
feedback which consists of past actions that are positive only. Core challenges
of item recommendation are (1) how to formulate a training objective from
implicit feedback and (2) how to efficiently train models over a large item
catalogue. This article provides an overview of item recommendation, its unique
characteristics and some common approaches. It starts with an introduction to
the problem and discusses different training objectives. The main body deals
with learning algorithms and presents sampling based algorithms for general
recommenders and more efficient algorithms for dot product models. Finally, the
application of item recommenders for retrieval tasks is discussed.
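As a concrete illustration of the sampling-based algorithms the abstract mentions, the following is a minimal sketch of pairwise training for a dot-product model from implicit feedback. It assumes a BPR-style objective with uniformly sampled negatives, synthetic data, and illustrative hyperparameters; it is not the article's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit feedback: (user, item) pairs of observed positive actions only.
n_users, n_items, dim = 50, 200, 16
positives = [(u, int(rng.integers(n_items))) for u in range(n_users) for _ in range(5)]
pos_sets = {}
for u, i in positives:
    pos_sets.setdefault(u, set()).add(i)

# Dot-product model: score(u, i) = <U[u], V[i]>.
U = 0.1 * rng.standard_normal((n_users, dim))
V = 0.1 * rng.standard_normal((n_items, dim))

def bpr_step(lr=0.05, reg=0.01):
    """One SGD step on a BPR-style pairwise loss with one sampled negative."""
    u, i = positives[int(rng.integers(len(positives)))]  # sampled observed positive
    j = int(rng.integers(n_items))                       # uniformly sampled negative
    while j in pos_sets[u]:
        j = int(rng.integers(n_items))
    x = U[u] @ (V[i] - V[j])              # score difference s(u, i) - s(u, j)
    g = 1.0 / (1.0 + np.exp(x))           # gradient factor of -log sigmoid(x)
    uu = U[u].copy()                      # keep the old user vector for item updates
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * uu - reg * V[i])
    V[j] += lr * (-g * uu - reg * V[j])

for _ in range(20000):
    bpr_step()
```

Because only positive interactions are observed, the objective contrasts each observed item against a sampled unobserved one; after training, observed pairs should score higher on average than random items.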
Related papers
- On Recommending Category: A Cascading Approach [20.84790649501248]
Category-level recommendation allows e-commerce platforms to promote users' engagements by expanding their interests to different types of items.
We propose a cascading category recommender (CCRec) model with a variational autoencoder (VAE) to encode item-level information to perform category-level recommendations.
arXiv Detail & Related papers (2025-12-17T23:32:33Z) - Slow Thinking for Sequential Recommendation [88.46598279655575]
We present a novel slow thinking recommendation model, named STREAM-Rec.
Our approach is capable of analyzing historical user behavior, generating a multi-step, deliberative reasoning process, and delivering personalized recommendations.
In particular, we focus on two key challenges: (1) identifying the suitable reasoning patterns in recommender systems, and (2) exploring how to effectively stimulate the reasoning capabilities of traditional recommenders.
arXiv Detail & Related papers (2025-04-13T15:53:30Z) - Pre-training Generative Recommender with Multi-Identifier Item Tokenization [78.87007819266957]
We propose MTGRec to augment token sequence data for Generative Recommender pre-training.
Our approach involves two key innovations: multi-identifier item tokenization and curriculum recommender pre-training.
Extensive experiments on three public benchmark datasets demonstrate that MTGRec significantly outperforms both traditional and generative recommendation baselines.
arXiv Detail & Related papers (2025-04-06T08:03:03Z) - Why Not Together? A Multiple-Round Recommender System for Queries and Items [37.709748983831034]
A fundamental technique of recommender systems involves modeling user preferences, where queries and items are widely used as symbolic representations of user interests.
We propose a novel approach named Multiple-round Auto Guess-and-Update System (MAGUS) that capitalizes on the synergies between both types.
arXiv Detail & Related papers (2024-12-14T10:49:00Z) - Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation [51.25461871988366]
We propose a sequential recommendation algorithm based on a pre-trained language model and knowledge distillation.
The proposed algorithm enhances recommendation accuracy and provides timely recommendation services.
arXiv Detail & Related papers (2024-09-23T08:39:07Z) - End-to-End Learnable Item Tokenization for Generative Recommendation [51.82768744368208]
We propose ETEGRec, a novel End-To-End Generative Recommender by seamlessly integrating item tokenization and generative recommendation.
Our framework is developed based on the dual encoder-decoder architecture, which consists of an item tokenizer and a generative recommender.
arXiv Detail & Related papers (2024-09-09T12:11:53Z) - Optimal Design for Human Preference Elicitation [17.520528548509944]
We study efficient human preference elicitation for learning preference models.
The key idea is to generalize optimal designs, a methodology for computing optimal information-gathering policies.
We show that our algorithms are practical by evaluating them on existing question-answering problems.
arXiv Detail & Related papers (2024-04-22T06:05:35Z) - RecRec: Algorithmic Recourse for Recommender Systems [41.97186998947909]
It is crucial for all stakeholders to understand the model's rationale behind making certain predictions and recommendations.
This is especially true for the content providers whose livelihoods depend on the recommender system.
We propose a recourse framework for recommender systems, targeted towards the content providers.
arXiv Detail & Related papers (2023-08-28T22:26:50Z) - Large Language Models are Zero-Shot Rankers for Recommender Systems [76.02500186203929]
This work aims to investigate the capacity of large language models (LLMs) to act as the ranking model for recommender systems.
We show that LLMs have promising zero-shot ranking abilities but struggle to perceive the order of historical interactions.
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies.
arXiv Detail & Related papers (2023-05-15T17:57:39Z) - Zero-shot Item-based Recommendation via Multi-task Product Knowledge Graph Pre-Training [106.85813323510783]
This paper presents a novel paradigm for the Zero-Shot Item-based Recommendation (ZSIR) task.
It pre-trains a model on product knowledge graph (PKG) to refine the item features from PLMs.
We identify three challenges for pre-training on a PKG: multi-type relations in the PKG, semantic divergence between generic item information and relations, and domain discrepancy between the PKG and the downstream ZSIR task.
arXiv Detail & Related papers (2023-05-12T17:38:24Z) - How to Index Item IDs for Recommendation Foundation Models [49.425959632372425]
Recommendation foundation models utilize large language models (LLMs) for recommendation by converting recommendation tasks into natural language tasks.
To avoid generating excessively long text and hallucinated recommendations, creating LLM-compatible item IDs is essential.
We propose four simple yet effective solutions, including sequential indexing, collaborative indexing, semantic (content-based) indexing, and hybrid indexing.
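As a hypothetical sketch of the first of these strategies, sequential indexing can be read as assigning integer IDs in order of first appearance in interaction histories, so items that co-occur end up with nearby IDs. The function name and toy histories below are illustrative, not taken from the paper:

```python
def sequential_index(histories):
    """Map items to consecutive integer IDs in first-seen order.

    Items appearing in the same interaction histories receive nearby IDs,
    which keeps related items close in the LLM's ID space.
    """
    ids = {}
    for history in histories:
        for item in history:
            if item not in ids:
                ids[item] = len(ids)
    return ids

histories = [["shoes", "socks"], ["socks", "hat"], ["shoes", "hat", "scarf"]]
print(sequential_index(histories))  # {'shoes': 0, 'socks': 1, 'hat': 2, 'scarf': 3}
```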
arXiv Detail & Related papers (2023-05-11T05:02:37Z) - Hierarchical Conversational Preference Elicitation with Bandit Feedback [36.507341041113825]
We formulate a new conversational bandit problem that allows the recommender system to choose either a key-term or an item to recommend at each round.
We conduct a survey and analyze a real-world dataset to find that, unlike assumptions made in prior works, key-term rewards are mainly affected by rewards of representative items.
We propose two bandit algorithms, Hier-UCB and Hier-LinUCB, that leverage this observed relationship and the hierarchical structure between key-terms and items.
arXiv Detail & Related papers (2022-09-06T05:35:24Z) - ELECRec: Training Sequential Recommenders as Discriminators [94.93227906678285]
Sequential recommendation is often framed as a generative task, i.e., training a sequential encoder to generate the next item of a user's interests.
We propose to train the sequential recommenders as discriminators rather than generators.
Our method trains a discriminator to distinguish whether a sampled item is a 'real' target item or not.
arXiv Detail & Related papers (2022-04-05T06:19:45Z) - Batch versus Sequential Active Learning for Recommender Systems [3.7796614675664397]
We show that the sequential mode produces the most accurate recommendations for dense data sets.
For most active learners, the best predictor turned out to be FunkSVD in combination with sequential mode.
arXiv Detail & Related papers (2022-01-19T12:50:36Z) - Comparative Explanations of Recommendations [33.89230323979306]
We develop an extract-and-refine architecture to explain the relative comparisons among a set of ranked items from a recommender system.
We design a new explanation quality metric based on BLEU to guide the end-to-end training of the extraction and refinement components.
arXiv Detail & Related papers (2021-11-01T02:55:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.