A Multi-Granularity Matching Attention Network for Query Intent
Classification in E-commerce Retrieval
- URL: http://arxiv.org/abs/2303.15870v1
- Date: Tue, 28 Mar 2023 10:25:17 GMT
- Title: A Multi-Granularity Matching Attention Network for Query Intent
Classification in E-commerce Retrieval
- Authors: Chunyuan Yuan, Yiming Qiu, Mingming Li, Haiqing Hu, Songlin Wang,
Sulong Xu
- Abstract summary: This paper proposes a Multi-granularity Matching Attention Network (MMAN) for query intent classification.
MMAN contains three modules: a self-matching module, a char-level matching module, and a semantic-level matching module.
We conduct extensive offline and online A/B experiments, and the results show that the MMAN significantly outperforms the strong baselines.
- Score: 9.034096715927731
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Query intent classification, which aims at helping customers find
desired products, has become an essential component of e-commerce search.
Existing query intent classification models either design more sophisticated
architectures to enhance the representation learning of queries, or exploit
label graphs and multi-task learning to incorporate external information.
However, these models cannot capture multi-granularity matching features
between queries and categories, which makes it hard for them to bridge the
expression gap between informal queries and categories.
This paper proposes a Multi-granularity Matching Attention Network (MMAN),
which contains three modules: a self-matching module, a char-level matching
module, and a semantic-level matching module to comprehensively extract
features from the query and a query-category interaction matrix. In this way,
the model can eliminate the difference in expression between queries and
categories for query intent classification. We conduct extensive offline and
online A/B experiments, and the results show that MMAN significantly
outperforms strong baselines, demonstrating its superiority and effectiveness.
MMAN has been deployed in production and delivers substantial commercial value
to our company.
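The multi-granularity matching idea above can be illustrated with a minimal sketch: a char-level interaction matrix captures surface overlap between a query and a category, a semantic-level matrix captures embedding similarity, and attention pooling reduces each matrix to a score. This is not the authors' implementation; the toy per-char embeddings, the pooling, and the equal-weight fusion are all assumptions for illustration.

```python
import numpy as np

def char_match_matrix(query, category):
    """Char-level interaction matrix: M[i, j] = 1 if query char i equals category char j."""
    return np.array([[1.0 if qc == cc else 0.0 for cc in category] for qc in query])

def semantic_match_matrix(q_emb, c_emb):
    """Semantic-level interaction matrix of cosine similarities between embeddings."""
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    c = c_emb / np.linalg.norm(c_emb, axis=1, keepdims=True)
    return q @ c.T

def attention_pool(match_matrix):
    """Softmax attention over the category axis, then average over the query axis."""
    w = np.exp(match_matrix - match_matrix.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    return float((w * match_matrix).sum(axis=1).mean())

query, category = "red shoes", "shoes"
rng = np.random.default_rng(0)
q_emb = rng.normal(size=(len(query), 8))    # toy per-char embeddings (assumption)
c_emb = rng.normal(size=(len(category), 8))

char_score = attention_pool(char_match_matrix(query, category))
sem_score = attention_pool(semantic_match_matrix(q_emb, c_emb))
intent_score = 0.5 * char_score + 0.5 * sem_score  # equal-weight fusion (assumption)
```

In the paper's setting, the fused features would feed a classifier over the category taxonomy rather than a single scalar score; the sketch only shows how the two matching granularities view the same query-category pair differently.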
Related papers
- MMCL: Boosting Deformable DETR-Based Detectors with Multi-Class Min-Margin Contrastive Learning for Superior Prohibited Item Detection [8.23801404004195]
Prohibited item detection in X-ray images is one of the most effective security inspection methods.
Overlapping phenomena unique to X-ray images cause the coupling of foreground and background features.
We propose a Multi-Class Min-Margin Contrastive Learning (MMCL) method to clarify the category semantic information of content queries.
arXiv Detail & Related papers (2024-06-05T12:07:58Z)
- Multi-modal Auto-regressive Modeling via Visual Words [96.25078866446053]
We propose the concept of visual words, which maps the visual features to probability distributions over Large Multi-modal Models' vocabulary.
We further explore the distribution of visual features in the semantic space within LMM and the possibility of using text embeddings to represent visual information.
arXiv Detail & Related papers (2024-03-12T14:58:52Z)
- Ada-Retrieval: An Adaptive Multi-Round Retrieval Paradigm for Sequential Recommendations [50.03560306423678]
We propose Ada-Retrieval, an adaptive multi-round retrieval paradigm for recommender systems.
Ada-Retrieval iteratively refines user representations to better capture potential candidates in the full item space.
arXiv Detail & Related papers (2024-01-12T15:26:40Z)
- Beyond Semantics: Learning a Behavior Augmented Relevance Model with Self-supervised Learning [25.356999988217325]
Relevance modeling aims to locate desirable items for corresponding queries.
Auxiliary query-item interactions extracted from users' historical behavior data can provide hints that further reveal search intents.
Our model builds multi-level co-attention for distilling coarse-grained and fine-grained semantic representations from both neighbor and target views.
arXiv Detail & Related papers (2023-08-10T06:52:53Z)
- Dual-Gated Fusion with Prefix-Tuning for Multi-Modal Relation Extraction [13.454953507205278]
Multi-Modal Relation Extraction aims at identifying the relation between two entities in texts that contain visual clues.
We propose a novel MMRE framework to better capture the deeper correlations of text, entity pair, and image/objects.
Our approach achieves excellent performance compared to strong competitors, even in the few-shot situation.
arXiv Detail & Related papers (2023-06-19T15:31:34Z)
- Named Entity and Relation Extraction with Multi-Modal Retrieval [51.660650522630526]
Multi-modal named entity recognition (NER) and relation extraction (RE) aim to leverage relevant image information to improve the performance of NER and RE.
We propose a novel Multi-modal Retrieval based framework (MoRe)
MoRe contains a text retrieval module and an image-based retrieval module, which retrieve related knowledge of the input text and image in the knowledge corpus respectively.
arXiv Detail & Related papers (2022-12-03T13:11:32Z)
- Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval [152.3504607706575]
This research aims to conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.
We first contribute the Product1M dataset and define two realistic instance-level retrieval tasks.
We then train a more effective cross-modal model that adaptively incorporates key concept information from the multi-modal data.
arXiv Detail & Related papers (2022-06-17T15:40:45Z)
- Semantic Representation and Dependency Learning for Multi-Label Image Recognition [76.52120002993728]
We propose a novel and effective semantic representation and dependency learning (SRDL) framework to learn category-specific semantic representation for each category.
Specifically, we design a category-specific attentional regions (CAR) module that generates channel- and spatial-wise attention matrices to guide the model.
We also design an object erasing (OE) module to implicitly learn semantic dependency among categories by erasing semantic-aware regions.
arXiv Detail & Related papers (2022-04-08T00:55:15Z)
- Extending CLIP for Category-to-image Retrieval in E-commerce [36.386210802938656]
E-commerce provides rich multimodal data that is barely leveraged in practice.
In practice, there is often a mismatch between a textual and a visual representation of a given category.
We introduce the task of category-to-image retrieval in e-commerce and propose a model for the task, CLIP-ITA.
arXiv Detail & Related papers (2021-12-21T15:33:23Z)
- APRF-Net: Attentive Pseudo-Relevance Feedback Network for Query Categorization [12.634704014206294]
We propose a novel deep neural model named Attentive Pseudo-Relevance Feedback Network (APRF-Net) to enhance the representation of rare queries for query categorization.
Our results show that the APRF-Net significantly improves query categorization by 5.9% on $F1@1$ score over the baselines, which increases to 8.2% improvement for the rare queries.
arXiv Detail & Related papers (2021-04-23T02:34:08Z)
- Query Focused Multi-Document Summarization with Distant Supervision [88.39032981994535]
Existing work relies heavily on retrieval-style methods for estimating the relevance between queries and text segments.
We propose a coarse-to-fine modeling framework which introduces separate modules for estimating whether segments are relevant to the query.
We demonstrate that our framework outperforms strong comparison systems on standard QFS benchmarks.
arXiv Detail & Related papers (2020-04-06T22:35:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.