Fashionpedia-Taste: A Dataset towards Explaining Human Fashion Taste
- URL: http://arxiv.org/abs/2305.02307v1
- Date: Wed, 3 May 2023 17:54:50 GMT
- Title: Fashionpedia-Taste: A Dataset towards Explaining Human Fashion Taste
- Authors: Mengyun Shi, Serge Belongie, Claire Cardie
- Abstract summary: We introduce an interpretability dataset, Fashionpedia-Taste, to explain why a subject likes or dislikes a fashion image.
Subjects are asked to provide their personal attributes and preferences regarding fashion, such as personality and preferred fashion brands.
Our dataset makes it possible for researchers to build computational models to fully understand and interpret human fashion taste.
- Score: 30.633812626305552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing fashion datasets do not consider the multiple factors that
cause a consumer to like or dislike a fashion image. Even when two consumers
like the same fashion image, they could like it for totally different reasons.
In this paper, we study why a consumer likes a certain fashion image. Towards
this goal, we introduce an interpretability dataset, Fashionpedia-Taste,
consisting of rich annotations that explain why a subject likes or dislikes a
fashion image from the following three perspectives: 1) localized attributes;
2) human attention; 3) captions. Furthermore, subjects are asked to provide
their personal attributes and preferences regarding fashion, such as
personality and preferred fashion brands. Our dataset makes it possible for
researchers to build computational models to fully understand and interpret
human fashion taste from different humanistic perspectives and modalities.
Related papers
- Social Media Fashion Knowledge Extraction as Captioning [61.41631195195498]
We study the task of social media fashion knowledge extraction.
We transform the fashion knowledge into a natural language caption with a sentence transformation method.
Our framework then aims to generate the sentence-based fashion knowledge directly from the social media post.
arXiv Detail & Related papers (2023-09-28T09:07:48Z) - Fashionpedia-Ads: Do Your Favorite Advertisements Reveal Your Fashion Taste? [30.633812626305552]
We study the correlation between advertisements and fashion taste.
We introduce a new dataset, Fashionpedia-Ads, which asks subjects to provide their preferences on both ad (fashion, beauty, car, and dessert) and fashion product (social network and e-commerce style) images.
arXiv Detail & Related papers (2023-05-03T18:00:42Z) - Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z) - From Culture to Clothing: Discovering the World Events Behind A Century of Fashion Images [100.20851232528925]
We propose a data-driven approach to identify specific cultural factors affecting the clothes people wear.
Our work is a first step towards a computational, scalable, and easily refreshable approach to link culture to clothing.
arXiv Detail & Related papers (2021-02-02T18:58:21Z) - Aesthetics, Personalization and Recommendation: A survey on Deep Learning in Fashion [3.202857828083949]
Aesthetics play a vital role in clothing recommendation, as users' decisions depend largely on whether the clothing is in line with their aesthetics; conventional image features, however, cannot capture this directly.
The survey presents notable approaches to this problem, delving into how visual data can be interpreted and leveraged.
It also highlights notable models, such as the tensor factorization model and the conditional random field model, that treat aesthetics as an important factor in apparel recommendation.
arXiv Detail & Related papers (2021-01-20T19:57:13Z) - Modeling Fashion Influence from Photos [108.58097776743331]
We explore fashion influence along two channels: geolocation and fashion brands.
We leverage public large-scale datasets of 7.7M Instagram photos from 44 major world cities.
Our results indicate the advantage of grounding visual style evolution both spatially and temporally.
arXiv Detail & Related papers (2020-11-17T20:24:03Z) - Fashionpedia: Ontology, Segmentation, and an Attribute Localization Dataset [62.77342894987297]
We propose a novel Attribute-Mask RCNN model to jointly perform instance segmentation and localized attribute recognition.
We also demonstrate that instance segmentation models pre-trained on Fashionpedia achieve better transfer learning performance on other fashion datasets than ImageNet pre-training.
arXiv Detail & Related papers (2020-04-26T02:38:26Z) - Fashion Meets Computer Vision: A Survey [41.41993143419999]
This paper provides a comprehensive survey of more than 200 major fashion-related works covering four main aspects for enabling intelligent fashion.
For each task, the benchmark datasets and the evaluation protocols are summarized.
arXiv Detail & Related papers (2020-03-31T07:08:23Z) - Can AI decrypt fashion jargon for you? [24.45460909986741]
It is not clear to people how exactly those low-level descriptions contribute to a style or any high-level fashion concept.
In this paper, we propose a data-driven solution to this concept-understanding issue by leveraging a large amount of existing product data on fashion sites.
We train a deep learning model that can explicitly predict and explain high-level fashion concepts in a product image from its low-level, domain-specific fashion features.
arXiv Detail & Related papers (2020-03-18T05:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.