Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level
Explanations
- URL: http://arxiv.org/abs/2001.10341v1
- Date: Sat, 18 Jan 2020 13:16:32 GMT
- Title: Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level
Explanations
- Authors: Huanrui Luo, Ning Yang, Philip S. Yu
- Abstract summary: We propose a novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable recommendations.
The main idea of HDE is to learn the dynamic embeddings of users and items for rating prediction.
As the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
- Score: 60.78696727039764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable recommendation is far from being well solved partly due to three
challenges. The first is the personalization of preference learning, which
requires that different items/users have different contributions to the
learning of user preference or item quality. The second one is dynamic
explanation, which is crucial for the timeliness of recommendation
explanations. The last one is the granularity of explanations. In practice,
aspect-level explanations are more persuasive than item-level or user-level
ones. In this paper, to address these challenges simultaneously, we propose a
novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable
recommendations, which can make recommendations with dynamic aspect-level
explanations. The main idea of HDE is to learn the dynamic embeddings of users
and items for rating prediction and the dynamic latent aspect
preference/quality vectors for the generation of aspect-level explanations,
through the fusion of dynamic implicit feedback extracted from reviews and
attentive user-item interactions. In particular, as the aspect
preference/quality of users/items is learned automatically, HDE is able to
capture the impact of aspects that are not mentioned in reviews of a user or an
item. Extensive experiments conducted on real-world datasets verify the
recommendation performance and explainability of HDE. The source code of our
work is available at https://github.com/lola63/HDE-Python.
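As a reading aid, the sketch below is a minimal, hypothetical PyTorch rendering of the kind of architecture the abstract describes: dynamic user/item states driven by review-derived features, fused with attentive user-item interactions, producing both a rating and latent aspect preference/quality vectors. All module names, dimensions, and fusion choices here are illustrative assumptions and do not reproduce the authors' released HDE code (see the repository linked above for that).

```python
# Illustrative sketch only: a hybrid model that fuses review-derived dynamic
# signals with attentive user-item interactions to predict a rating plus
# per-aspect preference/quality scores. Names and shapes are assumptions,
# not the authors' released HDE implementation.
import torch
import torch.nn as nn


class HybridAspectRecommender(nn.Module):
    def __init__(self, n_users, n_items, n_aspects, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # static user embedding
        self.item_emb = nn.Embedding(n_items, dim)   # static item embedding
        # GRUs model the *dynamic* part: how user/item states drift over time,
        # driven by review-derived features observed at each time step.
        self.user_gru = nn.GRU(dim, dim, batch_first=True)
        self.item_gru = nn.GRU(dim, dim, batch_first=True)
        # Attention weights one side's history by the other side's identity,
        # mimicking "attentive user-item interactions".
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Heads mapping the fused states to latent aspect preference (user)
        # and aspect quality (item) vectors, plus a rating predictor.
        self.user_aspect = nn.Linear(dim, n_aspects)
        self.item_aspect = nn.Linear(dim, n_aspects)
        self.rating = nn.Linear(2 * dim + n_aspects, 1)

    def forward(self, user_ids, item_ids, user_review_feats, item_review_feats):
        # user_review_feats / item_review_feats: (batch, seq_len, dim) features
        # extracted from each side's past reviews (the implicit feedback).
        u_static = self.user_emb(user_ids)            # (batch, dim)
        i_static = self.item_emb(item_ids)            # (batch, dim)
        u_hist, _ = self.user_gru(user_review_feats)  # dynamic user states
        i_hist, _ = self.item_gru(item_review_feats)  # dynamic item states
        # Let the target item attend over the user's dynamic history (and
        # vice versa) to obtain interaction-aware dynamic embeddings.
        u_dyn, _ = self.attn(i_static.unsqueeze(1), u_hist, u_hist)
        i_dyn, _ = self.attn(u_static.unsqueeze(1), i_hist, i_hist)
        u_vec = u_static + u_dyn.squeeze(1)
        i_vec = i_static + i_dyn.squeeze(1)
        # Aspect-level preference/quality; their agreement backs explanations.
        u_aspects = torch.sigmoid(self.user_aspect(u_vec))  # (batch, n_aspects)
        i_aspects = torch.sigmoid(self.item_aspect(i_vec))  # (batch, n_aspects)
        aspect_match = u_aspects * i_aspects
        score = self.rating(torch.cat([u_vec, i_vec, aspect_match], dim=-1))
        return score.squeeze(-1), u_aspects, i_aspects
```

In a sketch like this, the element-wise agreement between the user's aspect-preference vector and the item's aspect-quality vector is what would back an aspect-level explanation (e.g., surfacing the aspects with the highest agreement), which is the granularity the abstract argues for.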
Related papers
- MAPLE: Enhancing Review Generation with Multi-Aspect Prompt LEarning in Explainable Recommendation [12.874105550787514]
We propose a personalized, aspect-controlled model called Multi-Aspect Prompt LEarner (MAPLE).
Experiments on two real-world review datasets in the restaurant domain show that MAPLE outperforms the baseline review-generation models in terms of text.
arXiv Detail & Related papers (2024-08-19T10:12:52Z)
- A Large Language Model Enhanced Sequential Recommender for Joint Video and Comment Recommendation [77.42486522565295]
We propose a novel recommendation approach called LSVCR to jointly conduct personalized video and comment recommendation.
Our approach consists of two key components, namely a sequential recommendation (SR) model and a supplemental large language model (LLM) recommender.
In particular, we achieve a significant overall gain of 4.13% in comment watch time.
arXiv Detail & Related papers (2024-03-20T13:14:29Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions such as clicks and reviews to learn user and item representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- Explainable Recommender with Geometric Information Bottleneck [25.703872435370585]
We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
arXiv Detail & Related papers (2023-05-09T10:38:36Z)
- Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) model to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph.
arXiv Detail & Related papers (2022-07-14T05:59:58Z)
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within an MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- Knowledge-Enhanced Hierarchical Graph Transformer Network for Multi-Behavior Recommendation [56.12499090935242]
This work proposes a Knowledge-Enhanced Hierarchical Graph Transformer Network (KHGT) to investigate multi-typed interactive patterns between users and items in recommender systems.
KHGT is built upon a graph-structured neural architecture to capture type-specific behavior characteristics.
We show that KHGT consistently outperforms many state-of-the-art recommendation methods across various evaluation settings.
arXiv Detail & Related papers (2021-10-08T09:44:00Z)
- Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which applies counterfactual reasoning from causal inference to explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision; a toy sketch of this kind of search is given after this list.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
arXiv Detail & Related papers (2021-08-24T06:37:57Z)
- Attribute-aware Explainable Complementary Clothing Recommendation [37.30129304097086]
This work aims to tackle the explainability challenge in fashion recommendation by proposing a novel Attribute-aware Fashion Recommender (AFRec).
AFRec assesses outfit compatibility by explicitly leveraging attribute-level representations extracted from each item's visual features.
The attributes serve as the bridge between two fashion items, where we quantify the affinity of a pair of items through the learned compatibility between their attributes.
arXiv Detail & Related papers (2021-07-04T14:56:07Z)
- Joint Item Recommendation and Attribute Inference: An Adaptive Graph Convolutional Network Approach [61.2786065744784]
In recommender systems, users and items are associated with attributes, and users show preferences to items.
As annotating user (item) attributes is a labor-intensive task, attribute values are often incomplete, with many of them missing.
We propose an Adaptive Graph Convolutional Network (AGCN) approach for joint item recommendation and attribute inference.
arXiv Detail & Related papers (2020-05-25T10:50:01Z)
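Several of the related papers above (CERec, CountER) cast explanation as a counterfactual search: find a small change to item attributes or aspects that would flip the recommendation decision. The toy sketch below, referenced from the CountER entry, illustrates the "low complexity, high strength" trade-off using a hypothetical linear aspect scorer and a brute-force search; the scoring function, aspect names, and values are assumptions for illustration and are not taken from the papers listed.

```python
# Illustrative sketch of counterfactual, aspect-level explanation search
# (in the spirit of the "low complexity, high strength" trade-off above).
# The scoring model and the brute-force search are toy assumptions.
import itertools
import numpy as np

def recommend_score(item_aspect_quality, user_aspect_pref):
    # Toy recommender: the score is the preference-weighted sum of aspect quality.
    return float(np.dot(item_aspect_quality, user_aspect_pref))

def counterfactual_explanation(item_q, user_p, threshold, max_aspects=2, delta=0.3):
    """Find the smallest set of aspects whose degradation by `delta` drops the
    item's score below `threshold` (i.e. it would no longer be recommended).
    Smaller sets mean simpler explanations; larger drops mean stronger ones."""
    n = len(item_q)
    base = recommend_score(item_q, user_p)
    for k in range(1, max_aspects + 1):              # prefer simple explanations
        best = None
        for subset in itertools.combinations(range(n), k):
            perturbed = item_q.copy()
            perturbed[list(subset)] -= delta         # weaken these aspects
            new_score = recommend_score(perturbed, user_p)
            if new_score < threshold:                # the decision flips
                drop = base - new_score
                if best is None or drop > best[1]:   # prefer strong explanations
                    best = (subset, drop)
        if best is not None:
            return best
    return None

aspects = ["price", "service", "location"]           # hypothetical aspect names
item_q = np.array([0.9, 0.8, 0.4])                   # item quality per aspect
user_p = np.array([0.7, 0.9, 0.2])                   # user preference per aspect
result = counterfactual_explanation(item_q, user_p, threshold=1.0)
if result is not None:
    subset, drop = result
    print("Recommended because of:", [aspects[i] for i in subset],
          "(score would drop by %.2f)" % drop)
```

A real system would replace this brute-force subset search with the optimization formulations described in those papers; the sketch only shows the trade-off being optimized: the fewest perturbed aspects that produce the largest drop in the recommendation score.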
This list is automatically generated from the titles and abstracts of the papers on this site.