Cluster Based Deep Contextual Reinforcement Learning for top-k
Recommendations
- URL: http://arxiv.org/abs/2012.02291v1
- Date: Sun, 29 Nov 2020 20:24:39 GMT
- Title: Cluster Based Deep Contextual Reinforcement Learning for top-k
Recommendations
- Authors: Anubha Kabra, Anu Agarwal, Anil Singh Parihar
- Abstract summary: We propose a novel method for generating top-k recommendations by creating an ensemble of clustering with reinforcement learning.
We have incorporated DBSCAN clustering to tackle the vast item space, increasing efficiency multi-fold.
With partial updates and batch updates, the model learns user patterns continuously.
- Score: 2.8207195763355704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rapid advancements in the E-commerce sector over the last few decades have
led to an imminent need for personalised, efficient and dynamic recommendation
systems. To sufficiently cater to this need, we propose a novel method for
generating top-k recommendations by creating an ensemble of clustering with
reinforcement learning. We have incorporated DBSCAN clustering to tackle the
vast item space, increasing efficiency multi-fold. Moreover, by using
deep contextual reinforcement learning, our proposed work leverages the user
features to its full potential. With partial updates and batch updates, the
model learns user patterns continuously. The Duelling Bandit based exploration
provides robust exploration as compared to state-of-the-art strategies due to
its adaptive nature. Detailed experiments conducted on a public dataset verify
our claims about the efficiency of our technique compared to existing
techniques.
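As a rough illustration of the clustering step the abstract describes, the sketch below groups a synthetic item space into dense clusters with a minimal DBSCAN implementation, then ranks one cluster's items against a stand-in user-context vector to build a top-k slate. The data, parameters, and linear scoring rule are illustrative assumptions, not the paper's actual pipeline (which additionally uses deep contextual RL and dueling-bandit exploration).

```python
import numpy as np

def dbscan(points, eps, min_samples):
    """Minimal DBSCAN: assign each point a cluster id; -1 marks noise."""
    n = len(points)
    # Pairwise distances and epsilon-neighbourhoods
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    neighbours = [np.where(dists[i] <= eps)[0] for i in range(n)]
    core = np.array([len(nb) >= min_samples for nb in neighbours])
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        # Grow a new cluster outward from this unvisited core point
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            if core[j]:
                for k in neighbours[j]:
                    if labels[k] == -1:
                        labels[k] = cluster
                        stack.append(k)
        cluster += 1
    return labels

rng = np.random.default_rng(0)
# Three well-separated blobs standing in for item embeddings
centers = np.array([[0.0, 0.0], [10.0, 10.0], [-10.0, 10.0]])
items = np.vstack([c + rng.normal(scale=0.5, size=(200, 2)) for c in centers])

labels = dbscan(items, eps=1.0, min_samples=5)
n_clusters = labels.max() + 1

# Pick the cluster whose centroid best matches a (hypothetical) user-context
# vector, then rank only that cluster's items for the top-k slate -- so the
# per-request ranking cost scales with one cluster, not the whole item space.
context = np.array([1.0, 1.0])  # stands in for learned user features
centroids = np.array([items[labels == c].mean(axis=0) for c in range(n_clusters)])
best = int(np.argmax(centroids @ context))
members = np.where(labels == best)[0]
top_k = members[np.argsort(items[members] @ context)[::-1][:5]]
```

The design point is the one the abstract claims: clustering first shrinks the candidate set, so any downstream policy (here a trivial dot-product score) only ranks within the selected cluster.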
Related papers
- Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning [41.13568563835089]
We find that applying human habits of organizing and connecting information can serve as an efficient strategy when training deep learning models.
We propose a novel regularization loss function that encourages models to focus more on challenging knowledge areas.
arXiv Detail & Related papers (2024-10-06T01:30:40Z) - Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation [51.06031200728449]
We propose a novel framework called mccHRL to provide different levels of temporal abstraction on listwise recommendation.
Within the hierarchical framework, the high-level agent studies the evolution of user perception, while the low-level agent produces the item selection policy.
Results observe significant performance improvement by our method, compared with several well-known baselines.
arXiv Detail & Related papers (2024-09-11T17:01:06Z) - MMREC: LLM Based Multi-Modal Recommender System [2.3113916776957635]
This paper presents a novel approach to enhancing recommender systems by leveraging Large Language Models (LLMs) and deep learning techniques.
The proposed framework aims to improve the accuracy and relevance of recommendations by incorporating multi-modal information processing and a unified latent space representation.
arXiv Detail & Related papers (2024-08-08T04:31:29Z) - Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
Large Language Models (LLMs) pretrained on massive text corpora present a promising avenue for enhancing recommender systems.
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z) - Embedding in Recommender Systems: A Survey [67.67966158305603]
A crucial aspect is embedding techniques that convert the high-dimensional discrete features, such as user and item IDs, into low-dimensional continuous vectors.
Applying embedding techniques captures complex entity relationships and has spurred substantial research.
This survey covers embedding methods like collaborative filtering, self-supervised learning, and graph-based techniques.
arXiv Detail & Related papers (2023-10-28T06:31:06Z) - Hierarchical Reinforcement Learning for Modeling User Novelty-Seeking
Intent in Recommender Systems [26.519571240032967]
We propose a novel hierarchical reinforcement learning-based method to model the hierarchical user novelty-seeking intent.
We further incorporate diversity and novelty-related measurement in the reward function of the hierarchical RL (HRL) agent to encourage user exploration.
arXiv Detail & Related papers (2023-06-02T12:02:23Z) - A Deep Reinforcement Learning Approach for Composing Moving IoT Services [0.12891210250935145]
We introduce a moving crowdsourced service model, represented as a moving region.
We propose a deep reinforcement learning-based composition approach to select and compose moving IoT services.
The experiments on two real-world datasets verify the effectiveness and efficiency of the deep reinforcement learning-based approach.
arXiv Detail & Related papers (2021-11-06T22:02:31Z) - Generative Inverse Deep Reinforcement Learning for Online Recommendation [62.09946317831129]
We propose a novel inverse reinforcement learning approach, namely InvRec, for online recommendation.
InvRec automatically extracts the reward function from users' behaviors for online recommendation.
arXiv Detail & Related papers (2020-11-04T12:12:25Z) - Knowledge-guided Deep Reinforcement Learning for Interactive
Recommendation [49.32287384774351]
Interactive recommendation aims to learn from dynamic interactions between items and users to achieve responsiveness and accuracy.
We propose Knowledge-Guided deep Reinforcement learning to harness the advantages of both reinforcement learning and knowledge graphs for interactive recommendation.
arXiv Detail & Related papers (2020-04-17T05:26:47Z) - Learning From Multiple Experts: Self-paced Knowledge Distillation for
Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.