Can you recommend content to creatives instead of final consumers? A RecSys based on user's preferred visual styles
- URL: http://arxiv.org/abs/2208.10902v1
- Date: Tue, 23 Aug 2022 12:11:28 GMT
- Title: Can you recommend content to creatives instead of final consumers? A RecSys based on user's preferred visual styles
- Authors: Raul Gomez Bruballa, Lauren Burnham-King, Alessandra Sala
- Abstract summary: This report is an extension of the paper "Learning Users' Preferred Visual Styles in an Image Marketplace", presented at ACM RecSys '22.
We design a RecSys that learns visual style preferences transversal to the semantics of the projects users work on.
- Score: 69.69160476215895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing meaningful recommendations in a content marketplace is
challenging because users are not the final content consumers. Instead, most
users are creatives whose interests, linked to the projects they work on,
change rapidly and abruptly. To address the challenging task of recommending
images to content creators, we design a RecSys that learns visual style
preferences transversal to the semantics of the projects users work on. We
analyze the challenges of the task compared to content-based recommendations
driven by semantics, propose an evaluation setup, and explain its applications
in a global image marketplace.
This technical report is an extension of the paper "Learning Users' Preferred
Visual Styles in an Image Marketplace", presented at ACM RecSys '22.
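
The style-over-semantics idea in the abstract lends itself to a small illustration. The sketch below ranks candidate images purely by style affinity with a user's past downloads; the embedding size, the mean-pooled profile, and the random vectors standing in for a learned style encoder are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: rank images by style affinity, ignoring project semantics.
# Shapes, names, and mean pooling are illustrative assumptions; the paper's
# actual model is not reproduced here.
import numpy as np

STYLE_DIM = 64  # assumed size of the learned style embedding


def style_profile(downloaded_styles: np.ndarray) -> np.ndarray:
    """Aggregate a user's past downloads into one style vector (mean pooling)."""
    profile = downloaded_styles.mean(axis=0)
    return profile / (np.linalg.norm(profile) + 1e-8)


def rank_by_style(profile: np.ndarray, candidate_styles: np.ndarray) -> np.ndarray:
    """Order candidate images by cosine similarity to the user's style profile."""
    norms = np.linalg.norm(candidate_styles, axis=1, keepdims=True) + 1e-8
    scores = (candidate_styles / norms) @ profile
    return np.argsort(-scores)  # indices of best style matches first


# Toy usage: random vectors stand in for the output of a learned style encoder.
rng = np.random.default_rng(0)
past_downloads = rng.normal(size=(5, STYLE_DIM))
candidates = rng.normal(size=(100, STYLE_DIM))
print(rank_by_style(style_profile(past_downloads), candidates)[:10])
```

Because only the style subspace enters the score, a user who jumps between semantically unrelated projects keeps receiving images that match their preferred aesthetics, which is the behavior the abstract describes.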
Related papers
- Influencer: Empowering Everyday Users in Creating Promotional Posts via AI-infused Exploration and Customization [11.9449656506593]
Influencer is an interactive tool to assist novice creators in crafting high-quality promotional post designs.
Within Influencer, we contribute a multi-dimensional recommendation framework that allows users to intuitively generate new ideas.
Influencer implements a holistic promotional post design system that supports context-aware image and caption exploration.
arXiv Detail & Related papers (2024-07-20T16:27:49Z)
- Empowering Visual Creativity: A Vision-Language Assistant to Image Editing Recommendations [109.65267337037842]
We introduce the task of Image Editing Recommendation (IER)
IER aims to automatically generate diverse creative editing instructions from an input image and a simple prompt representing the users' under-specified editing purpose.
We introduce Creativity-Vision Language Assistant(Creativity-VLA), a multimodal framework designed specifically for edit-instruction generation.
arXiv Detail & Related papers (2024-05-31T18:22:29Z)
- Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Community [63.949893724058846]
Social reward, a form of community recognition, provides a strong source of motivation for users of online platforms to engage and contribute content.
This work pioneers a paradigm shift, unveiling Social Reward - an innovative reward modeling framework.
We embark on an extensive journey of dataset curation and refinement, drawing from Picsart: an online visual creation and editing platform.
arXiv Detail & Related papers (2024-02-15T10:56:31Z)
- Tell Me What Is Good About This Property: Leveraging Reviews For Segment-Personalized Image Collection Summarization [3.063926257586959]
We consider user intentions in the summarization of property visuals by analyzing property reviews.
By incorporating insights from reviews into our visual summaries, we present the content most relevant to each user.
Our experiments, including human perceptual studies, demonstrate the superiority of our cross-modal approach.
arXiv Detail & Related papers (2023-10-30T17:06:49Z)
- VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining [53.470662123170555]
We propose learning image aesthetics from user comments, and exploring vision-language pretraining methods to learn multimodal aesthetic representations.
Specifically, we pretrain an image-text encoder-decoder model on image-comment pairs, using contrastive and generative objectives to learn rich and generic aesthetic semantics without human labels (a minimal sketch of such a contrastive objective appears after this list).
Our results show that our pretrained aesthetic vision-language model outperforms prior works on image aesthetic captioning over the AVA-Captions dataset.
arXiv Detail & Related papers (2023-03-24T23:57:28Z)
- The Elements of Visual Art Recommendation: Learning Latent Semantic Representations of Paintings [7.79230326339002]
Artwork recommendation is challenging because it requires understanding how users interact with highly subjective content.
In this paper, we focus on efficiently capturing the elements (i.e., latent semantic relationships) of visual art for personalized recommendation.
arXiv Detail & Related papers (2023-02-28T18:17:36Z)
- Two-stage Visual Cues Enhancement Network for Referring Image Segmentation [89.49412325699537]
Referring Image Segmentation (RIS) aims at segmenting the target object in an image referred to by a given natural language expression.
In this paper, we tackle this problem by devising a Two-stage Visual cues enhancement Network (TV-Net).
Through the two-stage enhancement, our proposed TV-Net achieves better performance in learning fine-grained matching behaviors between the natural language expression and the image.
arXiv Detail & Related papers (2021-10-09T02:53:39Z)
- Feedback Shaping: A Modeling Approach to Nurture Content Creation [10.31854532203776]
We propose a modeling approach to predict how feedback from content consumers incentivizes creators.
We then leverage this model to optimize the newsfeed experience for content creators by reshaping the feedback distribution.
We present a deployed use case on the LinkedIn newsfeed, where we used this approach to improve content creation significantly without compromising the consumers' experience.
arXiv Detail & Related papers (2021-06-21T22:53:16Z)
- User-Inspired Posterior Network for Recommendation Reason Generation [53.035224183349385]
Recommendation reason generation plays a vital role in attracting customers' attention as well as improving user experience.
We propose a user-inspired multi-source posterior transformer (MSPT), which induces the model to reflect users' interests.
Experimental results show that our model is superior to traditional generative models.
arXiv Detail & Related papers (2021-02-16T02:08:52Z)
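
As noted in the VILA entry above, a symmetric contrastive objective over image-comment pairs can be written as a standard InfoNCE loss. The sketch below uses a generic CLIP-style formulation with assumed embeddings and temperature; VILA's exact training recipe may differ.

```python
# Hedged sketch of a symmetric contrastive (InfoNCE) loss over a batch of
# image-comment embedding pairs, as in CLIP-style pretraining. Embedding
# sizes and the temperature are illustrative assumptions.
import numpy as np


def log_softmax(x: np.ndarray) -> np.ndarray:
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))


def contrastive_loss(img: np.ndarray, txt: np.ndarray, temperature: float = 0.07) -> float:
    """Symmetric InfoNCE: matching (image, comment) pairs sit on the diagonal."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (batch, batch) similarity matrix
    diag = np.arange(len(img))
    loss_i2t = -log_softmax(logits)[diag, diag].mean()    # image -> comment
    loss_t2i = -log_softmax(logits.T)[diag, diag].mean()  # comment -> image
    return (loss_i2t + loss_t2i) / 2


# Toy usage with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(1)
print(contrastive_loss(rng.normal(size=(8, 32)), rng.normal(size=(8, 32))))
```

Matching pairs occupy the diagonal of the similarity matrix, so the loss pulls each image toward its own comment and away from the other comments in the batch, and vice versa.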
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.