G-STO: Sequential Main Shopping Intention Detection via
Graph-Regularized Stochastic Transformer
- URL: http://arxiv.org/abs/2306.14314v1
- Date: Sun, 25 Jun 2023 19:02:31 GMT
- Title: G-STO: Sequential Main Shopping Intention Detection via
Graph-Regularized Stochastic Transformer
- Authors: Yuchen Zhuang, Xin Shen, Yan Zhao, Chaosheng Dong, Ming Wang, Jin Li,
Chao Zhang
- Abstract summary: The area of main shopping intention detection remains under-investigated in the academic literature.
We develop a global relational graph as prior knowledge for regularization, allowing relevant shopping intentions to be distributionally close.
We evaluate our main shopping intention identification model on three different real-world datasets.
- Score: 20.415439583899847
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sequential recommendation requires understanding the dynamic patterns of
users' behaviors, contexts, and preferences from their historical interactions.
Most existing works focus on modeling user-item interactions only from the item
level, ignoring that they are driven by latent shopping intentions (e.g.,
ballpoint pens, miniatures, etc.). Detecting users' underlying shopping
intentions from their historical interactions is crucial for e-commerce
platforms such as Amazon to make their customers' shopping experiences more
convenient and efficient. Despite its significance,
the area of main shopping intention detection remains under-investigated in the
academic literature. To fill this gap, we propose a graph-regularized
stochastic Transformer method, G-STO. By considering intentions as sets of
products and user preferences as compositions of intentions, we model both of
them as stochastic Gaussian embeddings in the latent representation space.
Instead of training the stochastic representations from scratch, we develop a
global intention relational graph as prior knowledge for regularization,
allowing relevant shopping intentions to be distributionally close. Finally, we
feed the newly regularized stochastic embeddings into Transformer-based models
to encode sequential information from the intention transitions. We evaluate
our main shopping intention identification model on three different real-world
datasets, where G-STO outperforms the baselines by 18.08% in Hit@1, 7.01% in
Hit@10, and 6.11% in NDCG@10 on average.
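To make the pipeline the abstract describes concrete, here is a minimal, hypothetical PyTorch sketch of its three ingredients: intentions as diagonal Gaussian embeddings, a KL regularizer that pulls graph-adjacent intentions distributionally close, and a Transformer encoder over the reparameterized intention sequence. All class, method, and parameter names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StochasticIntentionEncoder(nn.Module):
    """Hypothetical sketch of G-STO-style graph-regularized stochastic embeddings."""

    def __init__(self, num_intentions, dim, num_heads=4, num_layers=2):
        super().__init__()
        self.mu = nn.Embedding(num_intentions, dim)       # Gaussian means
        self.log_var = nn.Embedding(num_intentions, dim)  # Gaussian log-variances
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    @staticmethod
    def kl_diag_gaussians(mu1, lv1, mu2, lv2):
        # Closed-form KL(N(mu1, e^lv1) || N(mu2, e^lv2)) for diagonal Gaussians.
        return 0.5 * ((lv2 - lv1)
                      + (lv1.exp() + (mu1 - mu2) ** 2) / lv2.exp() - 1.0).sum(-1)

    def graph_regularizer(self, edges):
        # edges: LongTensor [E, 2] from the global intention relational graph.
        # Penalizing KL between connected intentions keeps relevant intentions
        # distributionally close, which is the role of the graph prior.
        i, j = edges[:, 0], edges[:, 1]
        return self.kl_diag_gaussians(self.mu(i), self.log_var(i),
                                      self.mu(j), self.log_var(j)).mean()

    def forward(self, intention_seq):
        # intention_seq: LongTensor [batch, seq_len] of intention IDs.
        mu, lv = self.mu(intention_seq), self.log_var(intention_seq)
        z = mu + torch.randn_like(mu) * (0.5 * lv).exp()  # reparameterized sample
        return self.encoder(z)  # encodes sequential intention transitions
```

A training loop would add `graph_regularizer(edges)`, weighted by a hyperparameter, to the recommendation loss; how the relational graph itself is constructed is left to the paper.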
Related papers
- Intent-Aware Neural Query Reformulation for Behavior-Aligned Product Search [0.0]
This work introduces a robust data pipeline designed to mine and analyze large-scale buyer query logs.
The pipeline systematically captures patterns indicative of latent purchase intent, enabling the construction of a high-fidelity, intent-rich dataset.
Our findings highlight the value of intent-centric modeling in bridging the gap between sparse user inputs and complex product discovery goals.
arXiv Detail & Related papers (2025-07-29T20:20:07Z)
- NAM: A Normalization Attention Model for Personalized Product Search In Fliggy [14.447458070745231]
We propose a Normalization Attention Model (NAM) for personalized product search.
We show that our proposed NAM model significantly outperforms state-of-the-art baseline models.
arXiv Detail & Related papers (2025-06-10T02:46:05Z)
- Scaling Sequential Recommendation Models with Transformers [0.0]
We take inspiration from the scaling laws observed in training large language models, and explore similar principles for sequential recommendation.
Compute-optimal training is possible but requires a careful analysis of the compute-performance trade-offs specific to the application.
We also show that performance scaling translates to downstream tasks by fine-tuning larger pre-trained models on smaller task-specific domains.
arXiv Detail & Related papers (2024-12-10T15:20:56Z)
- Stanceformer: Target-Aware Transformer for Stance Detection [59.69858080492586]
Stance Detection involves discerning the stance expressed in a text towards a specific subject or target.
Prior works have relied on existing transformer models that lack the capability to prioritize targets effectively.
We introduce Stanceformer, a target-aware transformer model that incorporates enhanced attention towards the targets during both training and inference.
arXiv Detail & Related papers (2024-10-09T17:24:28Z)
- Exploring the Individuality and Collectivity of Intents behind Interactions for Graph Collaborative Filtering [9.740376003100437]
We propose a novel recommendation framework designated as Bilateral Intent-guided Graph Collaborative Filtering (BIGCF).
Specifically, we take a closer look at user-item interactions from a causal perspective and put forth the concept of individual intent.
To counter the sparsity of implicit feedback, the feature distributions of users and items are encoded via a Gaussian-based graph generation strategy.
arXiv Detail & Related papers (2024-05-15T02:31:26Z)
- Revolutionizing Retail Analytics: Advancing Inventory and Customer Insight with AI [0.0]
This paper introduces an innovative approach utilizing cutting-edge machine learning technologies.
We aim to create an advanced smart retail analytics system (SRAS), leveraging these technologies to enhance retail efficiency and customer engagement.
arXiv Detail & Related papers (2024-02-24T11:03:01Z)
- A Dynamic Graph Interactive Framework with Label-Semantic Injection for Spoken Language Understanding [43.48113981442722]
We propose a framework termed DGIF, which first leverages the semantic information of labels to give the model additional signals and enriched priors.
We propose a novel approach to construct the interactive graph based on the injection of label semantics, which can automatically update the graph to better alleviate error propagation.
arXiv Detail & Related papers (2022-11-08T05:57:46Z)
- A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information for the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z)
- Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) model to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences; a hedged sketch of this distance-based attention score appears after this list.
arXiv Detail & Related papers (2022-01-16T12:38:45Z)
- Learning Dynamic Compact Memory Embedding for Deformable Visual Object Tracking [82.34356879078955]
We propose a compact memory embedding to enhance the discrimination of the segmentation-based deformable visual tracking method.
Our method outperforms excellent segmentation-based trackers, i.e., D3S and SiamMask, on the DAVIS 2017 benchmark.
arXiv Detail & Related papers (2021-11-23T03:07:12Z)
- Dynamic Sequential Graph Learning for Click-Through Rate Prediction [29.756257920214168]
We propose a novel method to enhance users' representations by utilizing collaborative information from the local sub-graphs associated with users or items.
Results on real-world CTR prediction benchmarks demonstrate the improvements brought by DSGL.
arXiv Detail & Related papers (2021-09-26T09:23:43Z)
- Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection [81.32280287658486]
We propose a novel one-stage method, namely the Glance and Gaze Network (GGNet).
GGNet adaptively models a set of action-aware points (ActPoints) via glance and gaze steps.
We design an action-aware approach that effectively matches each detected interaction with its associated human-object pair.
arXiv Detail & Related papers (2021-04-12T08:01:04Z)
- Disentangled Graph Collaborative Filtering [100.26835145396782]
Disentangled Graph Collaborative Filtering (DGCF) is a new model for learning informative representations of users and items from interaction data.
By modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations.
DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE.
arXiv Detail & Related papers (2020-07-03T15:37:25Z)
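As referenced in the STOSA entry above, the following minimal sketch shows the closed-form squared 2-Wasserstein distance between diagonal Gaussians that can replace dot-product attention scores when items are embedded as distributions. The variable names and the negated-distance-to-softmax construction are illustrative assumptions, not the paper's exact module.

```python
import torch

def w2_distance_sq(mu_q, sigma_q, mu_k, sigma_k):
    """Squared 2-Wasserstein distance between diagonal Gaussians.

    For N(mu1, diag(s1^2)) and N(mu2, diag(s2^2)) this reduces to
    ||mu1 - mu2||^2 + ||s1 - s2||^2.
    """
    return ((mu_q - mu_k) ** 2).sum(-1) + ((sigma_q - sigma_k) ** 2).sum(-1)

# Illustrative use: smaller distances should mean stronger attention,
# so the negated distances serve as attention logits.
mu_q, sigma_q = torch.randn(2, 5, 8), torch.rand(2, 5, 8)  # query distributions
mu_k, sigma_k = torch.randn(2, 5, 8), torch.rand(2, 5, 8)  # key distributions
logits = -w2_distance_sq(mu_q.unsqueeze(2), sigma_q.unsqueeze(2),
                         mu_k.unsqueeze(1), sigma_k.unsqueeze(1))  # [2, 5, 5]
attn = torch.softmax(logits, dim=-1)
```

Unlike a dot product, this score is a proper metric between distributions, which is what lets uncertainty (the sigmas) influence how strongly two items attend to each other.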
This list is automatically generated from the titles and abstracts of the papers in this site.