USER: A Unified Information Search and Recommendation Model based on
Integrated Behavior Sequence
- URL: http://arxiv.org/abs/2109.15012v1
- Date: Thu, 30 Sep 2021 11:06:15 GMT
- Title: USER: A Unified Information Search and Recommendation Model based on
Integrated Behavior Sequence
- Authors: Jing Yao, Zhicheng Dou, Ruobing Xie, Yanxiong Lu, Zhiping Wang,
Ji-Rong Wen
- Abstract summary: We argue that jointly modeling these two tasks will benefit both of them and finally improve overall user satisfaction.
We propose first integrating the user's behaviors in search and recommendation into a heterogeneous behavior sequence, then utilizing a joint model for handling both tasks.
- Score: 36.91974576050925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Search and recommendation are the two most common approaches used by people
to obtain information. They share the same goal -- satisfying the user's
information need at the right time. There are already a lot of Internet
platforms and Apps providing both search and recommendation services, showing
us the demand and opportunity to simultaneously handle both tasks. However,
most platforms consider these two tasks independently -- they tend to train
separate search and recommendation models, without exploiting the
relatedness and dependency between them. In this paper, we argue that jointly
modeling these two tasks will benefit both of them and finally improve overall
user satisfaction. We investigate the interactions between these two tasks in
the specific information content service domain. We propose first integrating
the user's behaviors in search and recommendation into a heterogeneous behavior
sequence, then utilizing a joint model for handling both tasks based on the
unified sequence. More specifically, we design the Unified Information Search
and Recommendation model (USER), which mines user interests from the integrated
sequence and accomplishes the two tasks in a unified way.
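The core preprocessing step described above, merging a user's search and recommendation behaviors into one chronologically ordered heterogeneous sequence, can be sketched in a few lines. The snippet below is a minimal illustration under assumed field names (timestamp, doc_id, query); it is not the authors' implementation, which additionally encodes the unified sequence with a neural model to mine user interests.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Behavior:
    # Hypothetical fields; only meant to show the heterogeneous structure.
    timestamp: float             # when the interaction happened
    doc_id: str                  # the document the user interacted with
    query: Optional[str] = None  # set only for search behaviors

    @property
    def kind(self) -> str:
        # Each element is tagged by the task that produced it.
        return "search" if self.query is not None else "recommendation"

def integrate_behaviors(search_logs: List[Behavior],
                        rec_logs: List[Behavior]) -> List[Behavior]:
    """Merge both logs into one time-ordered heterogeneous sequence,
    which a joint model could consume for either task."""
    return sorted(search_logs + rec_logs, key=lambda b: b.timestamp)

# Usage: the unified history interleaves both behavior types by time.
history = integrate_behaviors(
    [Behavior(1.0, "doc_17", query="nba playoffs")],
    [Behavior(2.0, "doc_23"), Behavior(4.0, "doc_58")],
)
assert [b.kind for b in history] == ["search", "recommendation", "recommendation"]
```

At inference time, the same sequence serves both tasks: a search request arrives with a fresh query, while a recommendation request scores candidate documents against the interests mined from this unified history.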
Related papers
- Bridging Search and Recommendation in Generative Retrieval: Does One Task Help the Other? [9.215695600542249]
Generative retrieval for search and recommendation is a promising paradigm for retrieving items.
These generative systems can play a crucial role in centralizing a variety of Information Retrieval (IR) tasks in a single model.
This paper investigates whether and when such a unified approach can outperform task-specific models in the IR tasks of search and recommendation.
arXiv Detail & Related papers (2024-10-22T08:49:43Z)
- Unified Dual-Intent Translation for Joint Modeling of Search and Recommendation [44.59113848489519]
We propose a novel model named Unified Dual-Intents Translation for joint modeling of Search and Recommendation (UDITSR)
To accurately simulate users' demand intents in recommendation, we utilize real queries from search data as supervision information to guide the generation of these intents.
Extensive experiments demonstrate that UDITSR outperforms SOTA baselines both in search and recommendation tasks.
arXiv Detail & Related papers (2024-07-01T02:36:03Z)
- A Decoupling and Aggregating Framework for Joint Extraction of Entities and Relations [7.911978021993282]
We propose a novel model to jointly extract entities and relations.
We propose to decouple the feature encoding process into three parts, namely encoding subjects, encoding objects, and encoding relations.
Our model outperforms several previous state-of-the-art models.
arXiv Detail & Related papers (2024-05-14T04:27:16Z)
- BiVRec: Bidirectional View-based Multimodal Sequential Recommendation [55.87443627659778]
We propose an innovative framework, BivRec, that jointly trains the recommendation tasks in both ID and multimodal views.
BivRec achieves state-of-the-art performance on five datasets and showcases various practical advantages.
arXiv Detail & Related papers (2024-02-27T09:10:41Z)
- CARE: Co-Attention Network for Joint Entity and Relation Extraction [0.0]
We propose a Co-Attention network for joint entity and relation extraction.
Our approach includes adopting a parallel encoding strategy to learn separate representations for each subtask.
At the core of our approach is the co-attention module that captures two-way interaction between the two subtasks.
arXiv Detail & Related papers (2023-08-24T03:40:54Z)
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- Exploring Relational Context for Multi-Task Dense Prediction [76.86090370115]
We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples the pool of all available contexts for each task pair.
arXiv Detail & Related papers (2021-04-28T16:45:56Z)
- CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection [91.91911418421086]
Co-Salient Object Detection (CoSOD) aims at discovering salient objects that repeatedly appear in a given query group containing two or more relevant images.
One challenging issue is how to effectively capture co-saliency cues by modeling and exploiting inter-image relationships.
We present an end-to-end collaborative aggregation-and-distribution network (CoADNet) to capture both salient and repetitive visual patterns from multiple images.
arXiv Detail & Related papers (2020-11-10T04:28:11Z)
- A Co-Interactive Transformer for Joint Slot Filling and Intent Detection [61.109486326954205]
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system.
Previous studies either model the two tasks separately or only consider the single information flow from intent to slot.
We propose a Co-Interactive Transformer to consider the cross-impact between the two tasks simultaneously.
arXiv Detail & Related papers (2020-10-08T10:16:52Z)