Towards High-Order Complementary Recommendation via Logical Reasoning
Network
- URL: http://arxiv.org/abs/2212.04966v1
- Date: Fri, 9 Dec 2022 16:27:03 GMT
- Title: Towards High-Order Complementary Recommendation via Logical Reasoning
Network
- Authors: Longfeng Wu, Yao Zhou, Dawei Zhou
- Abstract summary: We propose a logical reasoning network, LOGIREC, to learn embeddings of products.
LOGIREC is capable of capturing the asymmetric complementary relationship between products.
We also propose a hybrid network that is jointly optimized for learning a more generic product representation.
- Score: 19.232457960085625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complementary recommendation has gained increasing attention in
e-commerce since it expedites the process of finding frequently-bought-with
products for users in their shopping journey. Therefore, learning a product
representation that
can reflect this complementary relationship plays a central role in modern
recommender systems. In this work, we propose a logical reasoning network,
LOGIREC, to effectively learn embeddings of products as well as various
transformations (projection, intersection, negation) between them. LOGIREC is
capable of capturing the asymmetric complementary relationship between products
and seamlessly extending to high-order recommendations, where more
comprehensive and meaningful complementary relationships are learned for a
query set of products. Finally, we propose a hybrid network that is jointly
optimized for learning a more generic product representation. We demonstrate
the effectiveness of our LOGIREC on multiple public real-world datasets in
terms of various ranking-based metrics under both low-order and high-order
recommendation scenarios.
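The abstract names three transformations over product embeddings (projection, intersection, negation). The paper's actual architecture, loss, and parameterization are not given here, so the sketch below is only an illustrative toy: the sigmoid-bounded embeddings, mean-based intersection, `1 - q` negation, L1 scoring, and all variable names are assumptions, not LOGIREC's method. It does show how an asymmetric projection matrix can make the "complementary-to" relation directional.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM = 8

# Hypothetical learned parameter (random here, purely for illustration).
W_complement = rng.normal(size=(DIM, DIM))  # relation-specific projection


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def project(q):
    """Projection: map a query embedding through the 'complementary-to'
    relation. Because W_complement is not symmetric, score(a -> b) need
    not equal score(b -> a), modelling the asymmetry of complementarity."""
    return sigmoid(q @ W_complement)


def intersect(queries):
    """Intersection: combine several projected queries into one high-order
    query. Element-wise mean is a simple permutation-invariant choice."""
    return np.mean(queries, axis=0)


def negate(q):
    """Negation: for embeddings bounded in (0, 1), 1 - q is a common toy
    choice for the complement of a fuzzy set."""
    return 1.0 - q


def score(query, item):
    """Rank candidate items by negative L1 distance to the query."""
    return -np.abs(query - item).sum()


# Toy catalog: item embeddings bounded in (0, 1).
items = {name: sigmoid(rng.normal(size=DIM))
         for name in ["phone", "case", "charger"]}

# Low-order query: items complementary to "phone".
q_low = project(items["phone"])
ranked = sorted(items, key=lambda n: score(q_low, items[n]), reverse=True)

# High-order query: items complementary to {"phone", "case"} jointly.
q_high = intersect([project(items["phone"]), project(items["case"])])
```

Ranking against `q_high` rather than `q_low` is what the abstract calls a high-order recommendation: the intersected query represents products complementary to the whole query set rather than to a single product.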
Related papers
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Models (LLMs) have the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLM to create item embeddings that bolster the performance of Sequential Recommender Systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- Beyond Similarity: Personalized Federated Recommendation with Composite Aggregation [22.359428566363945]
Federated recommendation aims to collect global knowledge by aggregating local models from massive devices.
Current methods mainly leverage aggregation functions invented by federated vision community to aggregate parameters from similar clients.
We propose a personalized Federated recommendation model with Composite Aggregation (FedCA)
arXiv Detail & Related papers (2024-06-06T10:17:52Z)
- BiVRec: Bidirectional View-based Multimodal Sequential Recommendation [55.87443627659778]
We propose an innovative framework, BivRec, that jointly trains the recommendation tasks in both ID and multimodal views.
BivRec achieves state-of-the-art performance on five datasets and showcases various practical advantages.
arXiv Detail & Related papers (2024-02-27T09:10:41Z)
- MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representation.
arXiv Detail & Related papers (2023-08-22T04:06:56Z)
- Two Is Better Than One: Dual Embeddings for Complementary Product Recommendations [2.294014185517203]
We apply a novel approach to finding complementary items by leveraging dual embedding representations for products.
Our model is effective yet simple to implement, making it a great candidate for generating complementary item recommendations at any e-commerce website.
arXiv Detail & Related papers (2022-11-28T00:58:21Z)
- Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval [152.3504607706575]
This research aims to conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.
We first contribute the Product1M datasets, and define two real practical instance-level retrieval tasks.
We train a more effective cross-modal model that adaptively incorporates key concept information from the multi-modal data.
arXiv Detail & Related papers (2022-06-17T15:40:45Z)
- ItemSage: Learning Product Embeddings for Shopping Recommendations at Pinterest [60.841761065439414]
At Pinterest, we build a single set of product embeddings called ItemSage to provide relevant recommendations in all shopping use cases.
This approach has led to significant improvements in engagement and conversion metrics, while reducing both infrastructure and maintenance costs.
arXiv Detail & Related papers (2022-05-24T02:28:58Z)
- Deep Reinforcement Learning-Based Product Recommender for Online Advertising [1.7778609937758327]
This paper compares value-based and policy-based deep RL algorithms for designing recommender systems for online advertising.
The designed recommender systems aim at maximizing the click-through rate (CTR) of the recommended items.
arXiv Detail & Related papers (2021-01-30T23:05:04Z)
- CRACT: Cascaded Regression-Align-Classification for Robust Visual Tracking [97.84109669027225]
We introduce an improved proposal refinement module, Cascaded Regression-Align-Classification (CRAC).
CRAC yields new state-of-the-art performances on many benchmarks.
In experiments on seven benchmarks including OTB-2015, UAV123, NfS, VOT-2018, TrackingNet, GOT-10k and LaSOT, our CRACT exhibits very promising results in comparison with state-of-the-art competitors.
arXiv Detail & Related papers (2020-11-25T02:18:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.