A Dataset for Learning Graph Representations to Predict Customer Returns
in Fashion Retail
- URL: http://arxiv.org/abs/2302.14096v1
- Date: Mon, 27 Feb 2023 19:14:37 GMT
- Title: A Dataset for Learning Graph Representations to Predict Customer Returns
in Fashion Retail
- Authors: Jamie McGowan, Elizabeth Guest, Ziyang Yan, Cong Zheng, Neha Patel,
Mason Cusack, Charlie Donaldson, Sofie de Cnudde, Gabriel Facini and Fabon
Dzogang
- Abstract summary: We present a novel dataset collected by ASOS to address the challenge of predicting customer returns in a fashion retail ecosystem.
We first explore the structure of this dataset with a focus on the application of Graph Representation Learning.
We show examples of a return prediction classification task with a selection of baseline models and a graph representation based model.
- Score: 0.243788455857269
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel dataset collected by ASOS (a major online fashion
retailer) to address the challenge of predicting customer returns in a fashion
retail ecosystem. With the release of this substantial dataset we hope to
motivate further collaboration between research communities and the fashion
industry. We first explore the structure of this dataset with a focus on the
application of Graph Representation Learning in order to exploit the natural
data structure and provide statistical insights into particular features within
the data. In addition to this, we show examples of a return prediction
classification task with a selection of baseline models (i.e. with no
intermediate representation learning step) and a graph representation based
model. We show that in a downstream return prediction classification task, an
F1-score of 0.792 can be found using a Graph Neural Network (GNN), improving
upon other models discussed in this work. Alongside this increased F1-score, we
also present a lower cross-entropy loss by recasting the data into a graph
structure, indicating more robust predictions from a GNN based solution. These
results provide evidence that GNNs could provide more impactful and usable
classifications than other baseline models on the presented dataset and with
this motivation, we hope to encourage further research into graph-based
approaches using the ASOS GraphReturns dataset.
Related papers
- TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning [7.879217146851148] (arXiv, 2024-11-23)
We propose an innovative Graph Neural Network (GNN) architecture that integrates a Top-m attention mechanism aggregation component and a neighborhood aggregation component.
To assess the effectiveness of our proposed model, we have applied it to citation sentiment prediction, a novel task previously unexplored in the GNN field.
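As a hedged illustration of one possible reading of that architecture (the published TANGNN layer may differ in detail), the sketch below shows a Top-m attention aggregation step in which each node scores all nodes with dot-product attention, keeps only its m highest-scoring ones, and aggregates them; in the paper this component is combined with a neighbourhood aggregation component.

```python
# Hedged sketch of a "Top-m attention" aggregation step: every node scores all
# nodes with dot-product attention, keeps its m best, and averages their values.
# This is an illustrative reading of the idea, not the published TANGNN layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopMAttention(nn.Module):
    def __init__(self, d, m=8):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.m = m

    def forward(self, x):                                      # x: (n, d) node features
        scores = self.q(x) @ self.k(x).t() / x.size(1) ** 0.5  # (n, n) attention logits
        top_val, top_idx = scores.topk(self.m, dim=1)          # keep the m best per node
        weights = F.softmax(top_val, dim=1)                    # (n, m) attention weights
        v = self.v(x)[top_idx]                                 # (n, m, d) selected values
        return (weights.unsqueeze(-1) * v).sum(1)              # (n, d) aggregated output

x = torch.randn(32, 16)
out = TopMAttention(16, m=4)(x)  # would be combined with neighbourhood aggregation downstream
```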
- Novel Representation Learning Technique using Graphs for Performance Analytics [0.0] (arXiv, 2024-01-19)
We propose a novel idea of transforming performance data into graphs to leverage the advancement of Graph Neural Network-based (GNN) techniques.
In contrast to other Machine Learning application domains, such as social networks, the graph is not given; instead, we need to build it.
We evaluate the effectiveness of the generated embeddings from GNNs based on how well they make even a simple feed-forward neural network perform for regression tasks.
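One plausible shape for that pipeline, purely as an illustration: build a graph over the tabular performance records (here a k-nearest-neighbour graph, which is an assumption), derive embeddings with a propagation step standing in for a learned GNN, and test them with a simple feed-forward regressor.

```python
# Hedged sketch: turn tabular performance records into a k-NN graph, produce
# embeddings by feature propagation, and feed them to a small MLP regressor.
# The k-NN construction and models are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

X = torch.randn(200, 10)                    # performance counters per run (toy data)
y = X[:, 0] * 2 + torch.randn(200) * 0.1    # toy regression target (e.g., runtime)

# Build a k-NN graph: connect each run to its k most similar runs.
k = 5
dist = torch.cdist(X, X)
knn_idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self-neighbour
adj = torch.zeros(200, 200)
adj.scatter_(1, knn_idx, 1.0)
adj = ((adj + adj.t()) > 0).float()

# One propagation step as a stand-in for a learned GNN embedding.
deg = adj.sum(1, keepdim=True).clamp(min=1)
emb = torch.cat([X, adj @ X / deg], dim=1)  # own features + neighbourhood mean

# Evaluate the embeddings with a simple feed-forward regressor.
mlp = nn.Sequential(nn.Linear(emb.size(1), 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(mlp(emb).squeeze(-1), y)
    loss.backward()
    opt.step()
```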
arXiv Detail & Related papers (2024-01-19T16:34:37Z) - Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph
Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting: structure learning that generalizes across graphs.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
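A hedged sketch of that coordination idea, under the assumption that the shared structure learner maps node features to a soft adjacency which a lightweight graph-specific GNN then consumes; the actual GraphGLOW formulation may differ.

```python
# Hedged sketch of the "shared structure learner + per-graph GNN" idea: one
# learner infers a soft adjacency from node features for any graph, and a
# lightweight GNN consumes it. Details are assumptions, not GraphGLOW itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureLearner(nn.Module):
    """Shared across graphs: maps node features to a soft adjacency matrix."""
    def __init__(self, d, d_hid=32):
        super().__init__()
        self.proj = nn.Linear(d, d_hid)

    def forward(self, x):
        z = F.normalize(self.proj(x), dim=1)
        return torch.sigmoid(z @ z.t())           # (n, n) soft edge weights

class GraphSpecificGNN(nn.Module):
    def __init__(self, d, n_classes):
        super().__init__()
        self.lin = nn.Linear(d, n_classes)

    def forward(self, x, soft_adj):
        deg = soft_adj.sum(1, keepdim=True).clamp(min=1e-6)
        return self.lin(soft_adj @ x / deg)        # one propagation step + classify

learner = StructureLearner(d=16)                   # trained jointly on several graphs
gnn = GraphSpecificGNN(d=16, n_classes=3)
x = torch.randn(50, 16)                            # an unseen target graph's features
logits = gnn(x, learner(x))                        # adaptive structure, no fine-tuning
```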
arXiv Detail & Related papers (2023-06-20T03:33:22Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
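As a rough, hedged illustration of what inferring a private training graph could involve (not the GraphMI algorithm itself): optimise a relaxed, symmetric adjacency so that a released node classifier reproduces the known labels, add a sparsity penalty, and threshold the result to read off likely edges.

```python
# Hedged sketch of graph model inversion: given a released node classifier and
# the node features/labels, optimise a relaxed adjacency so the model's outputs
# match, then read off likely edges. Illustrative only, not GraphMI itself.
import torch
import torch.nn.functional as F

n, d, c = 30, 8, 3
x = torch.randn(n, d)
labels = torch.randint(0, c, (n,))
W = torch.randn(d, c)                               # stand-in for the victim GNN's weights

def victim(adj, x):                                 # one-hop propagation classifier
    deg = adj.sum(1, keepdim=True).clamp(min=1e-6)
    return (adj @ x / deg) @ W

adj_logits = torch.zeros(n, n, requires_grad=True)  # relaxed adjacency to be inferred
opt = torch.optim.Adam([adj_logits], lr=0.1)
for _ in range(200):
    adj = torch.sigmoid((adj_logits + adj_logits.t()) / 2)          # keep it symmetric
    loss = F.cross_entropy(victim(adj, x), labels) + 1e-3 * adj.sum()  # fit + sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

inferred_edges = (torch.sigmoid((adj_logits + adj_logits.t()) / 2).detach() > 0.9).nonzero()
```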
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in deep graph learning (DGL), i.e. optimal graph learning and low-resource graph learning, we also discuss and review the existing learning paradigms that are based on graph data augmentation.
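For concreteness, two of the most common augmentation modalities are sketched generically below: random edge dropping (structural) and node-feature masking (attribute). These are standard examples rather than the survey's own taxonomy.

```python
# Two common augmentation modalities as toy examples: random edge dropping
# (structural) and node-feature masking (attribute). Generic, illustrative code.
import torch

def drop_edges(edge_index, p=0.2):
    """Randomly remove a fraction p of edges; edge_index is a (2, E) tensor."""
    keep = torch.rand(edge_index.size(1)) > p
    return edge_index[:, keep]

def mask_features(x, p=0.1):
    """Zero out each feature entry independently with probability p."""
    return x * (torch.rand_like(x) > p)

edge_index = torch.randint(0, 100, (2, 400))
x = torch.randn(100, 16)
aug_edges, aug_x = drop_edges(edge_index), mask_features(x)
```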
arXiv Detail & Related papers (2022-02-16T18:30:33Z) - Towards a Taxonomy of Graph Learning Datasets [10.151886932716518]
Graph neural networks (GNNs) have attracted much attention due to their ability to leverage the intrinsic geometries of the underlying data.
Here, we provide a principled approach to taxonomize graph benchmarking datasets by carefully designing a collection of graph perturbations.
Our data-driven taxonomization of graph datasets provides a new understanding of critical dataset characteristics.
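A hedged sketch of the underlying idea, with placeholder components: apply a few controlled perturbations (e.g. shuffling node features, rewiring edges), record how a reference model's score changes, and use the resulting sensitivity profile to group datasets. The specific perturbations and the evaluation stub below are assumptions.

```python
# Hedged sketch of the perturbation-profile idea: apply several controlled graph
# perturbations, measure how a fixed reference model's score changes, and treat
# the vector of changes as a dataset signature. Purely illustrative.
import torch

def evaluate(edge_index, x, y):
    """Stand-in for training/evaluating a reference GNN; returns a score."""
    return torch.rand(1).item()                   # placeholder metric

def shuffle_features(x):
    return x[torch.randperm(x.size(0))]           # break feature-structure alignment

def rewire_edges(edge_index, n_nodes):
    return torch.randint(0, n_nodes, edge_index.shape)   # destroy graph structure

x, y = torch.randn(100, 16), torch.randint(0, 3, (100,))
edge_index = torch.randint(0, 100, (2, 400))

base = evaluate(edge_index, x, y)
profile = [
    base - evaluate(edge_index, shuffle_features(x), y),   # sensitivity to features
    base - evaluate(rewire_edges(edge_index, 100), x, y),  # sensitivity to structure
]
# Datasets with similar sensitivity profiles would fall into the same class.
```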
arXiv Detail & Related papers (2021-10-27T23:08:01Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
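A hedged sketch of that two-module design, under the assumption that the feature-data graph is the bipartite graph between instances and feature columns and that messages are simple weighted averages; the published model is more elaborate.

```python
# Hedged sketch of the two-module idea: a bipartite instance-feature graph lets a
# GNN-style propagation produce embeddings even for feature columns unseen in
# training, and a backbone network predicts from the result. Details are assumptions.
import torch
import torch.nn as nn

class FeatureExtrapolator(nn.Module):
    def __init__(self, d_emb=16, n_classes=2):
        super().__init__()
        self.feat_init = nn.Linear(1, d_emb)          # shared initial embedding per feature node
        self.backbone = nn.Sequential(nn.Linear(d_emb, 32), nn.ReLU(),
                                      nn.Linear(32, n_classes))

    def forward(self, X):                             # X: (n_instances, n_features)
        # Feature nodes start from a shared initialisation (so new columns are handled),
        # then exchange messages with the instances in which they appear.
        f = self.feat_init(torch.ones(X.size(1), 1))  # (n_features, d_emb)
        inst = X @ f / X.size(1)                      # instance <- feature messages
        f = X.t() @ inst / X.size(0)                  # feature  <- instance messages
        inst = X @ f / X.size(1)                      # refreshed instance embeddings
        return self.backbone(inst)

model = FeatureExtrapolator()
X_train = torch.randn(64, 10)                         # 10 features observed in training
X_test = torch.randn(8, 14)                           # 14 features at test time: still works
logits = model(X_test)
```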
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148] (arXiv, 2020-10-19)
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
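A hedged sketch of a FLAG-style training step, simplified from the description above: within one optimisation step, a perturbation on the node features is refined by a few gradient-ascent updates while the model's gradients accumulate, so the augmentation comes essentially for free. The model signature and hyperparameters are assumptions.

```python
# Hedged sketch of FLAG-style augmentation: during each optimisation step, node
# features get an adversarial perturbation refined by a few gradient-ascent
# steps, while the model's gradients accumulate. Simplified from the paper.
import torch
import torch.nn.functional as F

def flag_step(model, x, adj, y, optimizer, step_size=1e-3, m=3):
    optimizer.zero_grad()
    # start from a small random perturbation of the node features
    delta = torch.empty_like(x).uniform_(-step_size, step_size).requires_grad_(True)
    for _ in range(m):
        loss = F.cross_entropy(model(x + delta, adj), y) / m
        loss.backward()                               # accumulates the model's gradients
        # ascent step on the perturbation, then make it a fresh leaf tensor
        delta = (delta.detach() + step_size * delta.grad.sign()).requires_grad_(True)
    optimizer.step()                                  # one update with the augmented gradients
```

In a full training loop, this routine would replace the standard forward/backward pass for each mini-batch.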