The Elements of Visual Art Recommendation: Learning Latent Semantic
Representations of Paintings
- URL: http://arxiv.org/abs/2303.08182v1
- Date: Tue, 28 Feb 2023 18:17:36 GMT
- Title: The Elements of Visual Art Recommendation: Learning Latent Semantic
Representations of Paintings
- Authors: Bereket A. Yilma and Luis A. Leiva
- Abstract summary: Artwork recommendation is challenging because it requires understanding how users interact with highly subjective content.
In this paper, we focus on efficiently capturing the elements (i.e., latent semantic relationships) of visual art for personalized recommendation.
- Score: 7.79230326339002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artwork recommendation is challenging because it requires understanding how
users interact with highly subjective content, the complexity of the concepts
embedded within the artwork, and the emotional and cognitive reflections they
may trigger in users. In this paper, we focus on efficiently capturing the
elements (i.e., latent semantic relationships) of visual art for personalized
recommendation. We propose and study recommender systems based on textual and
visual feature learning techniques, as well as their combinations. We then
perform a small-scale and a large-scale user-centric evaluation of the quality
of the recommendations. Our results indicate that textual features compare
favourably with visual ones, whereas a fusion of both captures the most
suitable hidden semantic relationships for artwork recommendation. Ultimately,
this paper contributes to our understanding of how to deliver content that
suitably matches users' interests and how such content is perceived.
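
The abstract does not name the exact feature extractors, so the following is only a minimal, hypothetical sketch of the general idea: derive a latent textual representation of each painting, combine it with a visual embedding, and recommend by similarity to the paintings a user liked. The toy descriptions, the randomly generated visual embeddings, and the TF-IDF + truncated-SVD text pipeline are illustrative stand-ins, not the authors' method.

```python
# Illustrative sketch (not the paper's exact pipeline): fuse textual and visual
# latent representations of paintings and recommend by cosine similarity.
# Descriptions and visual embeddings below are placeholders; in practice the
# visual vectors would come from a pretrained vision model.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import normalize

descriptions = [
    "Impressionist landscape with a river at dusk",
    "Cubist portrait of a woman with a guitar",
    "Baroque still life with fruit and a skull",
]
visual_embeddings = np.random.rand(len(descriptions), 512)  # placeholder visual features

# Textual latent semantics via TF-IDF followed by truncated SVD (LSA).
tfidf = TfidfVectorizer(stop_words="english").fit_transform(descriptions)
text_latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Fusion: L2-normalise each modality, then concatenate.
fused = np.hstack([normalize(text_latent), normalize(visual_embeddings)])

# Recommend the paintings most similar to those a user liked.
liked = [0]  # indices of paintings the user interacted with
user_profile = fused[liked].mean(axis=0, keepdims=True)
scores = cosine_similarity(user_profile, fused).ravel()
ranking = [i for i in np.argsort(-scores) if i not in liked]
print("Recommended painting indices:", ranking)
```

In a real system the similarity scores would feed a ranking stage, and the relative weighting of the textual and visual blocks is a tunable design choice rather than the fixed concatenation shown here.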
Related papers
- Textual Aesthetics in Large Language Models [80.09790024030525]
We introduce a pipeline for aesthetics polishing and help construct a textual aesthetics dataset named TexAes.
We propose a textual aesthetics-powered fine-tuning method based on direct preference optimization, termed TAPO.
Our experiments demonstrate that using textual aesthetics data and employing the TAPO fine-tuning method not only improves aesthetic scores but also enhances performance on general evaluation datasets.
arXiv Detail & Related papers (2024-11-05T09:22:08Z)
- Tell Me What Is Good About This Property: Leveraging Reviews For Segment-Personalized Image Collection Summarization [3.063926257586959]
We consider user intentions in the summarization of property visuals by analyzing property reviews.
By incorporating insights from reviews into our visual summaries, we present the content most relevant to a user.
Our experiments, including human perceptual studies, demonstrate the superiority of our cross-modal approach.
arXiv Detail & Related papers (2023-10-30T17:06:49Z)
- Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z)
- Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models [64.24227572048075]
We propose a Knowledge-Aware Prompt Tuning (KAPT) framework for vision-language models.
Our approach takes inspiration from human intelligence, which typically incorporates external knowledge when recognizing novel categories of objects.
arXiv Detail & Related papers (2023-08-22T04:24:45Z)
- StyleEDL: Style-Guided High-order Attention Network for Image Emotion Distribution Learning [69.06749934902464]
We propose a style-guided high-order attention network for image emotion distribution learning termed StyleEDL.
StyleEDL interactively learns stylistic-aware representations of images by exploring the hierarchical stylistic information of visual contents.
In addition, we introduce a stylistic graph convolutional network to dynamically generate the content-dependent emotion representations.
arXiv Detail & Related papers (2023-08-06T03:22:46Z)
- Prompt Tuning Large Language Models on Personalized Aspect Extraction for Recommendations [26.519571240032967]
We propose to combine aspect extraction with aspect-based recommendation in an end-to-end manner.
Our proposed framework significantly outperforms state-of-the-art baselines in both the personalized aspect extraction and aspect-based recommendation tasks.
arXiv Detail & Related papers (2023-06-02T12:00:03Z)
- VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining [53.470662123170555]
We propose learning image aesthetics from user comments, and exploring vision-language pretraining methods to learn multimodal aesthetic representations.
Specifically, we pretrain an image-text encoder-decoder model with image-comment pairs, using contrastive and generative objectives to learn rich and generic aesthetic semantics without human labels.
Our results show that our pretrained aesthetic vision-language model outperforms prior works on image aesthetic captioning over the AVA-Captions dataset.
arXiv Detail & Related papers (2023-03-24T23:57:28Z)
- Can you recommend content to creatives instead of final consumers? A RecSys based on user's preferred visual styles [69.69160476215895]
This report is an extension of the paper "Learning Users' Preferred Visual Styles in an Image Marketplace", presented at ACM RecSys '22.
We design a RecSys that learns users' visual style preferences in relation to the semantics of the projects they work on.
arXiv Detail & Related papers (2022-08-23T12:11:28Z)
- Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors [34.56323846959459]
Interactive recommender systems allow users to express intent, preferences, constraints, and contexts in a richer fashion.
One challenge is inferring a user's semantic intent from the open-ended terms or attributes often used to describe a desired item.
We develop a framework to learn a representation that captures the semantics of such attributes and connects them to user preferences and behaviors in recommender systems.
arXiv Detail & Related papers (2022-02-06T18:45:15Z)
- Personalised Visual Art Recommendation by Learning Latent Semantic Representations [0.0]
We introduce an approach for personalised recommendation of visual art based on learning latent semantic representations of paintings.
Our LDA model successfully uncovers non-obvious semantic relationships between paintings whilst offering explainable recommendations (a minimal illustrative sketch follows this list).
arXiv Detail & Related papers (2020-07-24T14:50:10Z)
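
The 2020 paper above explicitly relies on an LDA topic model, so a brief illustration may help: fitting LDA to painting descriptions yields per-painting topic mixtures, paintings with similar mixtures can be surfaced as related, and the dominant topic offers a simple explanation for a recommendation. The descriptions, preprocessing, and two-topic configuration below are placeholders, not the cited paper's actual setup.

```python
# Minimal LDA sketch (illustrative, not the cited paper's exact configuration):
# learn topic distributions over painting descriptions and use them to surface
# semantically related works with a topic-based explanation.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = [
    "Religious scene with saints and golden halos",
    "Seascape with fishing boats under a stormy sky",
    "Mythological scene with Venus and cherubs",
    "Calm harbour at sunrise with sailing ships",
]

counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # painting-by-topic mixture matrix

# Paintings with similar topic mixtures are treated as semantically related;
# the dominant topic of each painting serves as a human-readable explanation.
sims = cosine_similarity(doc_topics)
for i, row in enumerate(sims):
    nearest = max((j for j in range(len(row)) if j != i), key=lambda j: row[j])
    print(f"Painting {i}: most related to painting {nearest} (dominant topic {doc_topics[i].argmax()})")
```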
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.