Multiple Interest and Fine Granularity Network for User Modeling
- URL: http://arxiv.org/abs/2112.02591v1
- Date: Sun, 5 Dec 2021 15:12:08 GMT
- Title: Multiple Interest and Fine Granularity Network for User Modeling
- Authors: Jiaxuan Xie, Jianxiong Wei, Qingsong Hua, Yu Zhang
- Abstract summary: User modeling plays a fundamental role in industrial recommender systems, in both the matching stage and the ranking stage, in terms of both the customer experience and business revenue.
Most existing deep-learning based approaches exploit item-ids and category-ids but neglect fine-grained features like color and material, which hinders modeling the fine granularity of users' interests.
We present the Multiple interest and Fine granularity Network (MFN), which tackles users' multiple and fine-grained interests and constructs the model from both the similarity relationship and the combination relationship among the users' multiple interests.
- Score: 3.508126539399186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User modeling plays a fundamental role in industrial recommender systems,
in both the matching stage and the ranking stage, in terms of both the customer
experience and business revenue. How to effectively extract users' multiple
interests from their historical behavior sequences to improve the relevance and
personalization of the recommendation results remains an open problem for user
modeling. Most existing deep-learning based approaches exploit item-ids and
category-ids but neglect fine-grained features like color and material, which
hinders modeling the fine granularity of users' interests. In this paper, we
present the Multiple interest and Fine granularity Network (MFN), which tackles
users' multiple and fine-grained interests and constructs the model from both
the similarity relationship and the combination relationship among the users'
multiple interests. Specifically, for modeling the similarity relationship, we
leverage two sets of embeddings: one is a fixed embedding from pre-trained
models (e.g., GloVe) that gives the attention weights, and the other is a
trainable embedding trained together with MFN. For modeling the combination
relationship, self-attentive layers are exploited to build higher-order
combinations of the different interest representations. In the construction of
the network, we design an interest-extract module that uses an attention
mechanism to capture multiple interest representations from user historical
behavior sequences, and we leverage an auxiliary loss to boost the distinction
among the interest representations. A hierarchical network is then applied to
model the attention relation between the target item and the multiple interest
vectors of different granularities. We evaluate MFN on both public and
industrial datasets. The experimental results demonstrate that the proposed MFN
achieves superior performance compared with other existing representative
methods.
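The abstract's two-embedding attention scheme (a frozen pre-trained table that supplies attention weights, and a trainable table that supplies the aggregated values) can be sketched as below. This is a minimal illustration under assumed shapes, not the authors' implementation: the names `E_fixed`, `E_train`, `queries`, and `extract_interests`, and all dimensions, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

V, d, K, T = 50, 8, 3, 10   # vocab size, embed dim, num interests, sequence length

# Two embedding tables: a fixed one (stand-in for pre-trained vectors such as
# GloVe) used only to compute attention weights, and a trainable one whose
# rows are aggregated into the interest representations.
E_fixed = rng.normal(size=(V, d))   # frozen during training
E_train = rng.normal(size=(V, d))   # would be updated by gradient descent

queries = rng.normal(size=(K, d))   # one learnable query per latent interest

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def extract_interests(seq_ids):
    """Attention-based interest extraction: weights come from the fixed
    embeddings, values come from the trainable embeddings."""
    fixed = E_fixed[seq_ids]                   # (T, d): attention keys
    values = E_train[seq_ids]                  # (T, d): aggregated values
    scores = queries @ fixed.T / np.sqrt(d)    # (K, T)
    attn = softmax(scores, axis=-1)
    return attn @ values                       # (K, d): one vector per interest

seq = rng.integers(0, V, size=T)   # a toy user behavior sequence of item ids
interests = extract_interests(seq)
print(interests.shape)  # (3, 8)
```

In a full model the K interest vectors would then feed the self-attentive combination layers and the hierarchical target-attention network described in the abstract.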
Related papers
- BiVRec: Bidirectional View-based Multimodal Sequential Recommendation [55.87443627659778]
We propose an innovative framework, BivRec, that jointly trains the recommendation tasks in both ID and multimodal views.
BivRec achieves state-of-the-art performance on five datasets and showcases various practical advantages.
arXiv Detail & Related papers (2024-02-27T09:10:41Z) - MISSRec: Pre-training and Transferring Multi-modal Interest-aware
Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representation.
arXiv Detail & Related papers (2023-08-22T04:06:56Z) - A Model-Agnostic Framework for Recommendation via Interest-aware Item
Embeddings [4.989653738257287]
Interest-aware Capsule network (IaCN) is a model-agnostic framework that directly learns interest-oriented item representations.
IaCN serves as an auxiliary task, enabling the joint learning of both item-based and interest-based representations.
We evaluate the proposed approach on benchmark datasets, exploring various scenarios involving different deep neural networks.
arXiv Detail & Related papers (2023-08-17T22:40:59Z) - M$^3$Net: Multi-view Encoding, Matching, and Fusion for Few-shot
Fine-grained Action Recognition [80.21796574234287]
M$^3$Net is a matching-based framework for few-shot fine-grained (FS-FG) action recognition.
It incorporates multi-view encoding, multi-view matching, and multi-view fusion to facilitate embedding encoding, similarity matching, and decision making.
Explainable visualizations and experimental results demonstrate the superiority of M$^3$Net in capturing fine-grained action details.
arXiv Detail & Related papers (2023-08-06T09:15:14Z) - Deep Stable Multi-Interest Learning for Out-of-distribution Sequential
Recommendation [21.35873758251157]
We propose a novel multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL), which attempts to de-correlate the extracted interests in the model.
DESMIL incorporates a weighted correlation estimation loss based on Hilbert-Schmidt Independence Criterion (HSIC), with which training samples are weighted, to minimize the correlations among extracted interests.
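The HSIC dependence measure underlying DESMIL's correlation loss can be sketched with the standard biased empirical estimator. For simplicity this uses linear kernels rather than whatever kernel the paper chooses, and the variable names and toy data are hypothetical; it only illustrates that HSIC scores dependent interest representations higher than independent ones.

```python
import numpy as np

def hsic(X, Y):
    """Biased empirical HSIC estimator with linear kernels:
    trace(Kx H Ky H) / (n - 1)^2, where H is the centering matrix."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kx, Ky = X @ X.T, Y @ Y.T             # linear-kernel Gram matrices
    return np.trace(Kx @ H @ Ky @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
a = rng.normal(size=(64, 4))                # one extracted interest per sample
b = rng.normal(size=(64, 4))                # an independent second interest
c = a + 0.01 * rng.normal(size=(64, 4))     # a nearly duplicated interest

print(hsic(a, b) < hsic(a, c))  # True: duplicated interests score far higher
```

Minimizing such a score between pairs of extracted interests pushes them toward statistical independence, which is the de-correlation goal stated above.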
arXiv Detail & Related papers (2023-04-12T05:13:54Z) - Coarse-to-Fine Knowledge-Enhanced Multi-Interest Learning Framework for
Multi-Behavior Recommendation [52.89816309759537]
Multi-types of behaviors (e.g., clicking, adding to cart, purchasing, etc.) widely exist in most real-world recommendation scenarios.
The state-of-the-art multi-behavior models learn behavior dependencies indistinguishably with all historical interactions as input.
We propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning framework to learn shared and behavior-specific interests for different behaviors.
arXiv Detail & Related papers (2022-08-03T05:28:14Z) - Improving Multi-Interest Network with Stable Learning [13.514488368734776]
We propose a novel multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL).
DESMIL tries to eliminate the influence of subtle dependencies among captured interests via learning weights for training samples.
We conduct extensive experiments on public recommendation datasets, a large-scale industrial dataset and the synthetic datasets.
arXiv Detail & Related papers (2022-07-14T07:49:28Z) - Modeling High-order Interactions across Multi-interests for Micro-video
Recommendation [65.16624625748068]
We propose a Self-over-Co Attention module to enhance user's interest representation.
In particular, we first use co-attention to model correlation patterns across different levels and then use self-attention to model correlation patterns within a specific level.
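The "co-attention first, then self-attention" ordering described above can be sketched as follows. This is a generic scaled-dot-product illustration, not the paper's architecture: the level names, feature sizes, and function names are all assumed for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(A, B):
    """Attend features of level A over level B (cross-level correlation)."""
    scores = A @ B.T / np.sqrt(A.shape[-1])   # (len_A, len_B)
    return softmax(scores) @ B                # (len_A, d)

def self_attention(X):
    """Model correlation patterns within a single level."""
    scores = X @ X.T / np.sqrt(X.shape[-1])
    return softmax(scores) @ X

rng = np.random.default_rng(0)
item_level = rng.normal(size=(5, 16))       # e.g. item-level interest features
category_level = rng.normal(size=(7, 16))   # e.g. category-level features

# Self-over-Co ordering: cross-level co-attention first,
# then self-attention within the resulting level.
cross = co_attention(item_level, category_level)
enhanced = self_attention(cross)
print(enhanced.shape)  # (5, 16)
```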
arXiv Detail & Related papers (2021-04-01T07:20:15Z) - MRIF: Multi-resolution Interest Fusion for Recommendation [0.0]
This paper presents a multi-resolution interest fusion model (MRIF) that takes both properties of users' interests into consideration.
The proposed model is capable of capturing the dynamic changes in users' interests at different temporal ranges, and provides an effective way to combine a group of multi-resolution user interests to make predictions.
arXiv Detail & Related papers (2020-07-08T02:32:15Z) - Disentangled Graph Collaborative Filtering [100.26835145396782]
Disentangled Graph Collaborative Filtering (DGCF) is a new model for learning informative representations of users and items from interaction data.
By modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations.
DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE.
arXiv Detail & Related papers (2020-07-03T15:37:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.