Personalised Visual Art Recommendation by Learning Latent Semantic
Representations
- URL: http://arxiv.org/abs/2008.02687v1
- Date: Fri, 24 Jul 2020 14:50:10 GMT
- Title: Personalised Visual Art Recommendation by Learning Latent Semantic
Representations
- Authors: Bereket Abera Yilma, Najib Aghenda, Marcelo Romero, Yannick Naudet and
Herve Panetto
- Abstract summary: We introduce an approach for Personalised Recommendation of Visual arts based on learning latent semantic representation of paintings.
Our LDA model manages to successfully uncover non-obvious semantic relationships between paintings whilst being able to offer explainable recommendations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recommender systems, data representation techniques play an
important role, as they have the power to entangle, hide and reveal explanatory
factors embedded within datasets. Hence, they influence the quality of
recommendations. Specifically, in Visual Art (VA) recommendation, the
complexity of the concepts embodied within paintings makes the task of
capturing semantics by machines far from trivial. In VA recommendation, prominent works commonly use manually
curated metadata to drive recommendations. Recent works in this domain aim at
leveraging visual features extracted using Deep Neural Networks (DNNs). However,
such data representation approaches are resource-demanding and lack a
direct interpretation, hindering user acceptance. To address these limitations,
we introduce an approach for Personalised Recommendation of Visual arts based
on learning latent semantic representation of paintings. Specifically, we
trained a Latent Dirichlet Allocation (LDA) model on textual descriptions of
paintings. Our LDA model manages to successfully uncover non-obvious semantic
relationships between paintings whilst being able to offer explainable
recommendations. Experimental evaluations demonstrate that our method tends to
perform better than exploiting visual features extracted using pre-trained Deep
Neural Networks.
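The pipeline the abstract describes (an LDA topic model fitted on painting descriptions, with recommendations drawn from similarity in the latent topic space) can be sketched roughly as follows. This is a minimal illustration using scikit-learn, not the authors' implementation: the toy descriptions, the number of topics, and the similarity measure are all assumptions for demonstration.

```python
# Hedged sketch: LDA over painting descriptions, recommending by
# cosine similarity of the learned topic distributions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy corpus; the paper uses real curated descriptions of paintings.
descriptions = [
    "stormy seascape with fishing boats under dark clouds",
    "portrait of a noblewoman in renaissance dress",
    "calm harbour scene with sailing ships at dawn",
    "religious scene depicting saints inside a cathedral",
]

# Bag-of-words representation of the textual descriptions.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(descriptions)

# Fit LDA; the topic count here is illustrative, not taken from the paper.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_dist = lda.fit_transform(counts)  # one topic distribution per painting

# Recommend paintings closest to a liked one in latent topic space.
sims = cosine_similarity(topic_dist)
liked = 0  # suppose the user liked the seascape
ranking = [i for i in sims[liked].argsort()[::-1] if i != liked]
print([descriptions[i] for i in ranking])
```

Because each painting is a distribution over human-inspectable topics, the top words of the topics shared between a liked painting and a recommended one can serve as the kind of explanation the abstract refers to.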
Related papers
- Instructing Prompt-to-Prompt Generation for Zero-Shot Learning [116.33775552866476]
We propose a Prompt-to-Prompt generation methodology (P2P) to distill instructive visual prompts for transferable knowledge discovery.
The core of P2P is to mine semantic-related instruction from prompt-conditioned visual features and text instruction on modal-sharing semantic concepts.
arXiv Detail & Related papers (2024-06-05T07:59:48Z)
- Vision-Language Models Provide Promptable Representations for Reinforcement Learning [67.40524195671479]
We propose a novel approach that uses the vast amounts of general and indexable world knowledge encoded in vision-language models (VLMs) pre-trained on Internet-scale data for embodied reinforcement learning (RL).
We show that our approach can use chain-of-thought prompting to produce representations of common-sense semantic reasoning, improving policy performance in novel scenes by 1.5 times.
arXiv Detail & Related papers (2024-02-05T00:48:56Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- The Elements of Visual Art Recommendation: Learning Latent Semantic Representations of Paintings [7.79230326339002]
Artwork recommendation is challenging because it requires understanding how users interact with highly subjective content.
In this paper, we focus on efficiently capturing the elements (i.e., latent semantic relationships) of visual art for personalized recommendation.
arXiv Detail & Related papers (2023-02-28T18:17:36Z)
- Bridging the visual gap in VLN via semantically richer instructions [3.5789352263336847]
We show that state-of-the-art models are not severely affected when they receive only limited or even no visual data.
We propose a new data augmentation method that fosters the inclusion of more explicit visual information.
arXiv Detail & Related papers (2022-10-27T15:58:07Z)
- Rating and aspect-based opinion graph embeddings for explainable recommendations [69.9674326582747]
We propose to exploit embeddings extracted from graphs that combine information from ratings and aspect-based opinions expressed in textual reviews.
We then adapt and evaluate state-of-the-art graph embedding techniques over graphs generated from Amazon and Yelp reviews on six domains, outperforming baseline recommenders.
arXiv Detail & Related papers (2021-07-07T14:07:07Z)
- A Survey on Neural Recommendation: From Collaborative Filtering to Content and Context Enriched Recommendation [70.69134448863483]
Research in recommendation has shifted to inventing new recommender models based on neural networks.
In recent years, we have witnessed significant progress in developing neural recommender models.
arXiv Detail & Related papers (2021-04-27T08:03:52Z)
- Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction [11.427019313283997]
We propose a novel formulation of interpretable deep neural networks for the attribution task.
Using masked weights, hidden features can be deeply attributed, split into several input-restricted sub-networks and trained as a boosted mixture of experts.
arXiv Detail & Related papers (2020-08-26T06:46:49Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Semantically-Guided Representation Learning for Self-Supervised Monocular Depth [40.49380547487908]
We propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning.
Our method improves upon the state of the art for self-supervised monocular depth prediction over all pixels, fine-grained details, and per semantic categories.
arXiv Detail & Related papers (2020-02-27T18:40:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.