A Tale of Color Variants: Representation and Self-Supervised Learning in
Fashion E-Commerce
- URL: http://arxiv.org/abs/2112.02910v1
- Date: Mon, 6 Dec 2021 10:24:54 GMT
- Title: A Tale of Color Variants: Representation and Self-Supervised Learning in
Fashion E-Commerce
- Authors: Ujjal Kr Dutta, Sandeep Repakula, Maulik Parmar, Abhinav Ravi
- Abstract summary: We propose a generic framework, with deep visual Representation Learning at its heart, to address this problem for our fashion e-commerce platform.
Our framework can be trained with supervisory signals in the form of manually obtained triplets.
Interestingly, we observed that this crucial problem in fashion e-commerce can also be solved by simple color-jitter-based image augmentation.
- Score: 2.3449131636069898
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we address a problem crucial to fashion e-commerce
(with respect to both customer experience and revenue): color variants
identification, i.e., identifying fashion products that match exactly in their
design (or style) but differ only in their color. We propose a generic
framework, with deep visual Representation Learning at its heart, to address
this problem for our fashion e-commerce platform. Our framework can be trained
with supervisory signals in the form of manually obtained triplets. However, it
is infeasible to obtain manual annotations for the entire huge collection of
data usually present in fashion e-commerce platforms such as ours, while also
capturing all the difficult corner cases. Interestingly, we observed that this
problem can also be solved by simple color-jitter-based image augmentation,
which has recently become widely popular in the contrastive Self-Supervised
Learning (SSL) literature, where the aim is to learn visual representations
without manual labels. This naturally raised a question: could we leverage SSL
in our use case and still obtain performance comparable to our supervised
framework? The answer is yes, because color variant fashion objects are simply
manifestations of a single style in different colors, and a model trained to be
invariant to color (with or without supervision) should be able to recognize
this. The paper demonstrates this both qualitatively and quantitatively,
evaluating a couple of state-of-the-art SSL techniques, and also proposes a
novel method.
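The color-jitter augmentation central to the abstract can be sketched as follows. This is a minimal NumPy illustration of the general idea (randomly perturbing brightness, contrast, and saturation to make a model color-invariant), not the paper's actual pipeline; the jitter ranges and image size are illustrative assumptions.

```python
import numpy as np

def color_jitter(img, rng, brightness=0.4, contrast=0.4, saturation=0.4):
    """Apply random brightness/contrast/saturation jitter to an HxWx3
    float image in [0, 1] -- a simplified stand-in for the ColorJitter
    transform commonly used in contrastive SSL pipelines."""
    # Brightness: scale all channels by a random factor.
    img = img * rng.uniform(1 - brightness, 1 + brightness)
    # Contrast: interpolate towards the global mean intensity.
    c = rng.uniform(1 - contrast, 1 + contrast)
    img = (img - img.mean()) * c + img.mean()
    # Saturation: interpolate towards the per-pixel grayscale value.
    s = rng.uniform(1 - saturation, 1 + saturation)
    gray = img.mean(axis=2, keepdims=True)
    img = gray + (img - gray) * s
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))    # stand-in for a product image
view_a = color_jitter(image, rng)  # two color-jittered "views" of
view_b = color_jitter(image, rng)  # the same style, for contrastive SSL
```

In a contrastive setup, `view_a` and `view_b` would be treated as a positive pair, pushing the model to ignore exactly the color differences that separate color variants of one style.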
Related papers
- Exploring Color Invariance through Image-Level Ensemble Learning [7.254270666779331]
This study introduces a learning strategy named Random Color Erasing.
It selectively erases partial or complete color information in the training data without disrupting the original image structure.
This approach mitigates the risk of overfitting and enhances the model's ability to handle color variation.
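The erasing idea in the summary above might look roughly like the following sketch. The details (patch sizes, grayscale replacement) are guesses from the summary, not the paper's exact algorithm.

```python
import numpy as np

def random_color_erasing(img, rng, max_frac=0.5):
    """Illustrative sketch: erase the color (not the structure) of a
    random patch of an HxWx3 image by replacing it with its grayscale
    version. The luminance is kept, so the image structure survives."""
    h, w, _ = img.shape
    ph = int(h * rng.uniform(0.1, max_frac))  # random patch height
    pw = int(w * rng.uniform(0.1, max_frac))  # random patch width
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    patch = img[y:y + ph, x:x + pw]
    gray = patch.mean(axis=2, keepdims=True)  # per-pixel luminance
    out = img.copy()
    out[y:y + ph, x:x + pw] = np.broadcast_to(gray, patch.shape)
    return out
```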
arXiv Detail & Related papers (2024-01-19T06:04:48Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
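The "input-dependent temperature" mentioned above can be illustrated with a standard InfoNCE-style contrastive loss whose temperature varies per sample. This is a generic sketch of the concept, not UCAST's implementation; in the actual framework the temperature would itself be predicted from the input.

```python
import numpy as np

def info_nce_adaptive_tau(anchor, positive, negatives, tau):
    """InfoNCE-style contrastive loss with a per-sample temperature
    `tau`: smaller tau sharpens the softmax over similarities, larger
    tau softens it."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(anchor, positive)] +
                    [cos(anchor, n) for n in negatives])
    logits = sims / tau
    logits -= logits.max()  # numerical stability
    # Cross-entropy with the positive pair as the target class.
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```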
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning [89.86971464234533]
Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains.
We propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method.
Our method is gradually robust to the visual styles, thus boosting the generalization ability for novel target datasets.
arXiv Detail & Related papers (2023-02-18T11:54:37Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning [95.78635058475439]
Cross-domain few-shot learning aims at transferring knowledge from general nature images to novel domain-specific target categories.
This paper studies the problem of CD-FSL by spanning the style distributions of the source dataset.
To make our model robust to visual styles, the source images are augmented by swapping the styles of their low-frequency components with each other.
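The low-frequency style swap described above can be illustrated with a simplified stand-in: exchanging the low-frequency bands of two grayscale images via the FFT (Wave-SAN itself uses a wavelet decomposition; the FFT version below and its cutoff radius are assumptions for illustration only).

```python
import numpy as np

def swap_low_freq(img_a, img_b, radius=4):
    """Swap the low-frequency (style-carrying) components of two
    equally sized 2D grayscale images, keeping each image's
    high-frequency (structure-carrying) components."""
    Fa = np.fft.fftshift(np.fft.fft2(img_a))
    Fb = np.fft.fftshift(np.fft.fft2(img_b))
    h, w = img_a.shape
    cy, cx = h // 2, w // 2
    mask = np.zeros((h, w), dtype=bool)  # True = low-frequency region
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = True
    Fa_swapped = np.where(mask, Fb, Fa)  # a's structure, b's low freqs
    Fb_swapped = np.where(mask, Fa, Fb)  # b's structure, a's low freqs
    out_a = np.fft.ifft2(np.fft.ifftshift(Fa_swapped)).real
    out_b = np.fft.ifft2(np.fft.ifftshift(Fb_swapped)).real
    return out_a, out_b
```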
arXiv Detail & Related papers (2022-03-15T05:36:41Z)
- Formal Analysis of Art: Proxy Learning of Visual Concepts from Style Through Language Models [10.854399031287393]
We present a machine learning system that can quantify fine art paintings with a set of visual elements and principles of art.
We introduce a novel mechanism, called proxy learning, which learns visual concepts in paintings through their general relation to styles.
arXiv Detail & Related papers (2022-01-05T21:03:29Z)
- Semi-Supervised Visual Representation Learning for Fashion Compatibility [17.893627646979038]
We propose a semi-supervised learning approach to create pseudo-positive and pseudo-negative outfits on the fly during training.
For each labeled outfit in a training batch, we obtain a pseudo-outfit by matching each item in the labeled outfit with unlabeled items.
We conduct extensive experiments on Polyvore, Polyvore-D and our newly created large-scale Fashion Outfits datasets.
arXiv Detail & Related papers (2021-09-16T15:35:38Z)
- Color Variants Identification via Contrastive Self-Supervised Representation Learning [2.3449131636069898]
We utilize deep visual Representation Learning to address the problem of identification of color variants.
We propose a novel contrastive loss based self-supervised color variants model.
We evaluate our method both quantitatively and qualitatively to show that it outperforms existing self-supervised methods.
arXiv Detail & Related papers (2021-04-17T15:51:56Z)
- Self-supervised Visual Attribute Learning for Fashion Compatibility [71.73414832639698]
We present an SSL framework that enables us to learn color and texture-aware features without requiring any labels during training.
Our approach consists of three self-supervised tasks designed to capture different concepts that are neglected in prior work.
We show that our approach can be used for transfer learning, demonstrating that we can train on one dataset while achieving high performance on a different dataset.
arXiv Detail & Related papers (2020-08-01T21:53:22Z)
- Learning Diverse Fashion Collocation by Neural Graph Filtering [78.9188246136867]
We propose a novel fashion collocation framework, Neural Graph Filtering, that models a flexible set of fashion items via a graph neural network.
By applying symmetric operations on the edge vectors, this framework allows varying numbers of inputs/outputs and is invariant to their ordering.
We evaluate the proposed approach on three popular benchmarks, the Polyvore dataset, the Polyvore-D dataset, and our reorganized Amazon Fashion dataset.
arXiv Detail & Related papers (2020-03-11T16:17:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.