Learning Robust Models for e-Commerce Product Search
- URL: http://arxiv.org/abs/2005.03624v1
- Date: Thu, 7 May 2020 17:22:21 GMT
- Title: Learning Robust Models for e-Commerce Product Search
- Authors: Thanh V. Nguyen, Nikhil Rao and Karthik Subbian
- Abstract summary: Showing items that do not match search query intent degrades customer experience in e-commerce.
Mitigating the problem requires a large labeled dataset.
We develop a deep, end-to-end model that learns to effectively classify mismatches.
- Score: 23.537201383165755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Showing items that do not match search query intent degrades customer
experience in e-commerce. These mismatches result from counterfactual biases of
the ranking algorithms toward noisy behavioral signals such as clicks and
purchases in the search logs. Mitigating the problem requires a large labeled
dataset, which is expensive and time-consuming to obtain. In this paper, we
develop a deep, end-to-end model that learns to effectively classify mismatches
and to generate hard mismatched examples to improve the classifier. We train
the model end-to-end by introducing a latent variable into the cross-entropy
loss that alternates between using the real and generated samples. This not
only makes the classifier more robust but also boosts the overall ranking
performance. Our model achieves relative gains over baselines of more than 26%
in F-score and more than 17% in area under the PR curve. On live search
traffic, it yields significant improvements in multiple countries.
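The alternating objective described in the abstract can be pictured with a short sketch. This is a minimal illustration under assumed interfaces, not the authors' implementation: `classifier` (scoring query-item pairs), `generator` (producing hard mismatched item representations), and `p_real` are hypothetical placeholders, and the latent variable is modeled as a simple Bernoulli switch.

```python
import torch
import torch.nn.functional as F

def training_step(classifier, generator, query_emb, item_emb, labels, p_real=0.5):
    """One alternating step: a latent switch z picks real or generated samples."""
    z = torch.bernoulli(torch.tensor(p_real))  # latent Bernoulli variable
    if z == 1:
        # Cross-entropy on real labeled query-item pairs.
        logits = classifier(query_emb, item_emb)
        loss = F.cross_entropy(logits, labels)
    else:
        # Cross-entropy on generated hard mismatches, labeled as mismatch (class 0).
        fake_item_emb = generator(query_emb)
        logits = classifier(query_emb, fake_item_emb)
        fake_labels = torch.zeros(query_emb.size(0), dtype=torch.long)
        loss = F.cross_entropy(logits, fake_labels)
    return loss
```

Under this reading, the classifier is exposed to progressively harder mismatches over training, which is the mechanism the abstract credits for both the robustness and the ranking gains.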
Related papers
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Multi-output Headed Ensembles for Product Item Classification [0.9053163124987533]
We propose a deep learning based classification model framework for e-commerce catalogs.
We show improvements against robust industry standard baseline models.
We also propose a novel way to evaluate model performance using user sessions.
arXiv Detail & Related papers (2023-07-29T01:23:36Z) - Unified Embedding Based Personalized Retrieval in Etsy Search [0.206242362470764]
We propose learning a unified embedding model incorporating graph, transformer and term-based embeddings end to end.
Our personalized retrieval model significantly improves the overall search experience, as measured by a 5.58% increase in search purchase rate and a 2.63% increase in site-wide conversion rate.
arXiv Detail & Related papers (2023-06-07T23:24:50Z) - Consistent Text Categorization using Data Augmentation in e-Commerce [1.558017967663767]
We propose a new framework for consistent text categorization.
Our goal is to improve the model's consistency while maintaining its production-level performance.
arXiv Detail & Related papers (2023-05-09T12:47:28Z) - Smoothly Giving up: Robustness for Simple Models [30.56684535186692]
Examples of algorithms to train such simple models include logistic regression and boosting.
We use a family of joint convex loss functions, which tune between canonical convex loss functions, to robustly train such models.
We also provide results for logistic regression and boosting on a COVID-19 dataset, highlighting the efficacy of the approach across multiple relevant domains.
arXiv Detail & Related papers (2023-02-17T19:48:11Z) - Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify the segmented object.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z) - On the Efficacy of Adversarial Data Collection for Question Answering:
Results from a Large-Scale Randomized Study [65.17429512679695]
In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.
Despite ADC's intuitive appeal, it remains unclear when training on adversarial datasets produces more robust models.
arXiv Detail & Related papers (2021-06-02T00:48:33Z) - Heterogeneous Network Embedding for Deep Semantic Relevance Match in
E-commerce Search [29.881612817309716]
We design an end-to-end First-and-Second-order Relevance prediction model for e-commerce item relevance.
We introduce external knowledge generated from BERT to refine the network of user behaviors.
Results of offline experiments showed that the new model significantly improved the prediction accuracy in terms of human relevance judgment.
arXiv Detail & Related papers (2021-01-13T03:12:53Z) - Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on
Higher-Order Voronoi Diagrams [69.4411417775822]
Adversarial examples are a widely studied phenomenon in machine learning models.
We propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification.
arXiv Detail & Related papers (2020-11-19T08:49:10Z) - The Devil is in Classification: A Simple Framework for Long-tail Object
Detection and Instance Segmentation [93.17367076148348]
We investigate the performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset.
We unveil that a major cause is the inaccurate classification of object proposals.
We propose a simple calibration framework to more effectively alleviate classification head bias with a bi-level class balanced sampling approach.
arXiv Detail & Related papers (2020-07-23T12:49:07Z) - AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses [97.50616524350123]
We build dialogue models that are dynamically aware of which utterances or tokens are dull, without any feature engineering.
The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch (see the sketch after this list).
The second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level.
The third model, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal.
arXiv Detail & Related papers (2020-01-15T18:32:06Z)
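As a rough illustration of the AvgOut idea from the last entry above, here is a hypothetical sketch: average the decoder's output distributions over a batch, then penalize probability mass that keeps concentrating on the same (dull) tokens. The exact objective in the paper may differ; `step_probs` and `diversity_weight` are assumed names.

```python
import torch

def avgout_regularizer(step_probs: torch.Tensor, diversity_weight: float = 1.0) -> torch.Tensor:
    """step_probs: (batch, seq_len, vocab) decoder output distributions."""
    # AvgOut: mean output distribution over every position in the batch.
    avg_out = step_probs.mean(dim=(0, 1))                 # shape: (vocab,)
    # High when individual distributions align with the batch average,
    # i.e. when the model keeps re-emitting the same dull tokens.
    dullness = (step_probs * avg_out).sum(dim=-1).mean()
    # MinAvgOut-style term: minimizing this maximizes diversity.
    return diversity_weight * dullness  # add to the usual NLL loss
```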
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.