FE-TCM: Filter-Enhanced Transformer Click Model for Web Search
- URL: http://arxiv.org/abs/2301.07854v1
- Date: Thu, 19 Jan 2023 02:51:47 GMT
- Title: FE-TCM: Filter-Enhanced Transformer Click Model for Web Search
- Authors: Yingfei Wang and Jianping Liu and Meng Wang and Xintao Chu
- Abstract summary: We use a Transformer as the backbone network for feature extraction, innovatively add a filter layer, and propose a new Filter-Enhanced Transformer Click Model (FE-TCM) for web search.
FE-TCM outperforms the existing click models for the click prediction.
- Score: 10.91456636784484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constructing click models and extracting implicit relevance feedback
from the interactions between users and search engines are very important for
improving the ranking of search results. Modeling users' click behavior with
neural networks has become one of the most effective ways to construct click
models. In this paper, we use a Transformer as the backbone network for feature
extraction, innovatively add a filter layer, and propose a new Filter-Enhanced
Transformer Click Model (FE-TCM) for web search. First, to reduce the influence
of noise in user behavior data, we use learnable filters to filter log noise.
Second, following the examination hypothesis, we model an attraction estimator
and an examination predictor that output attractiveness scores and examination
probabilities, respectively. A novel transformer model is used to learn deeper
representations across the different features. Finally, we apply combination
functions to integrate the attractiveness scores and examination probabilities
into the click prediction. Experiments on two real-world session datasets show
that FE-TCM outperforms existing click models for click prediction.
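The pipeline the abstract describes can be sketched in a few lines: a learnable frequency-domain filter denoises the behavior features, then an attraction estimator and an examination predictor are combined under the examination hypothesis. This is a minimal illustration, not the paper's model: the FFT-based filter parameterization, the linear stand-ins for the two transformer heads, and the multiplicative combination function are all assumptions.

```python
import numpy as np

def filter_layer(x, w):
    # Learnable frequency-domain filter: FFT along the sequence axis,
    # elementwise multiply by learnable weights, inverse FFT back.
    freq = np.fft.rfft(x, axis=0)
    return np.fft.irfft(freq * w, n=x.shape[0], axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy session: 5 ranked documents, each with 4 behavior features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))

# Learnable filter weights, initialized to the identity filter.
w = np.ones((5 // 2 + 1, 4))
denoised = filter_layer(feats, w)

# Attraction estimator and examination predictor (linear stand-ins
# for the paper's transformer-based heads).
att = sigmoid(denoised @ rng.normal(size=4))    # attractiveness scores
exam = sigmoid(denoised @ rng.normal(size=4))   # examination probabilities

# Examination hypothesis: P(click) = attractiveness * examination.
p_click = att * exam
```

With identity filter weights the filter layer is a no-op; training would adjust `w` to attenuate noisy frequency components of the click logs.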
Related papers
- Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study [61.64685376882383]
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models.
This paper investigates the robustness of existing CLTR models in complex and diverse situations.
We find that the DLA models and IPS-DCM show better robustness under various simulation settings than IPS-PBM and PRS with offline propensity estimation.
arXiv Detail & Related papers (2024-04-04T10:54:38Z)
- RAT: Retrieval-Augmented Transformer for Click-Through Rate Prediction [68.34355552090103]
This paper develops a Retrieval-Augmented Transformer (RAT), aiming to acquire fine-grained feature interactions within and across samples.
We then build Transformer layers with cascaded attention to capture both intra- and cross-sample feature interactions.
Experiments on real-world datasets substantiate the effectiveness of RAT and suggest its advantage in long-tail scenarios.
arXiv Detail & Related papers (2024-04-02T19:14:23Z)
- MAP: A Model-agnostic Pretraining Framework for Click-through Rate Prediction [39.48740397029264]
We propose a Model-agnostic pretraining (MAP) framework that applies feature corruption and recovery on multi-field categorical data.
We derive two practical algorithms: masked feature prediction (MFP) and replaced feature detection (RFD).
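The two corruption-and-recovery objectives named above can be sketched as functions over a toy multi-field categorical sample. The field names, vocabulary sizes, and `MASK` sentinel below are hypothetical; this only illustrates what "feature corruption" means, not MAP's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multi-field categorical sample: field -> category id.
VOCAB = {"site": 100, "device": 4, "hour": 24}
sample = {"site": 42, "device": 2, "hour": 17}

MASK = -1  # hypothetical sentinel id for a masked field

def corrupt(sample, mask_prob=0.5):
    """Masked feature prediction (MFP): randomly replace some field
    values with a MASK sentinel; pretraining recovers the original ids."""
    corrupted, targets = {}, {}
    for field, value in sample.items():
        if rng.random() < mask_prob:
            corrupted[field] = MASK
            targets[field] = value   # recovery target
        else:
            corrupted[field] = value
    return corrupted, targets

def replace(sample, replace_prob=0.5):
    """Replaced feature detection (RFD): swap some field values for
    random ids; pretraining predicts a per-field replaced/kept label."""
    corrupted, labels = {}, {}
    for field, value in sample.items():
        if rng.random() < replace_prob:
            corrupted[field] = int(rng.integers(VOCAB[field]))
            labels[field] = 1
        else:
            corrupted[field] = value
            labels[field] = 0
    return corrupted, labels
```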
arXiv Detail & Related papers (2023-08-03T12:55:55Z)
- Meta-Wrapper: Differentiable Wrapping Operator for User Interest Selection in CTR Prediction [97.99938802797377]
Click-through rate (CTR) prediction, whose goal is to predict the probability of the user to click on an item, has become increasingly significant in recommender systems.
Recent deep learning models that automatically extract user interest from user behaviors have achieved great success.
We propose a novel approach under the framework of the wrapper method, which is named Meta-Wrapper.
arXiv Detail & Related papers (2022-06-28T03:28:15Z)
- A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information for the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z)
- Scalar is Not Enough: Vectorization-based Unbiased Learning to Rank [29.934700345584726]
Unbiased learning to rank (ULTR) aims to train an unbiased ranking model from biased user click logs.
Most current ULTR methods are based on the examination hypothesis (EH), which assumes that the click probability can be factorized into two scalar functions.
We propose a vector-based EH and formulate the click probability as a dot product of two vector functions.
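The scalar-to-vector generalization can be seen in a tiny numerical example. The embeddings below are contrived so that the dot product reproduces a given scalar factorization; in practice both vectors are learned functions.

```python
import numpy as np

# Scalar examination hypothesis (EH):
# P(click) = P(examined) * P(relevant)
p_scalar = 0.7 * 0.6                    # = 0.42

# Vector-based EH: P(click) = <examination vector, relevance vector>.
# Contrived d-dim embeddings chosen so the dot product matches the
# scalar factorization above.
d = 8
exam_vec = np.full(d, 0.7 / np.sqrt(d))
rel_vec = np.full(d, 0.6 / np.sqrt(d))
p_vector = float(exam_vec @ rel_vec)    # reproduces the scalar value
```

The gain from the vector form is that a d-dimensional factorization can express click patterns a single scalar product cannot, while still containing the scalar EH as the d = 1 special case.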
arXiv Detail & Related papers (2022-06-03T17:23:25Z)
- Masked Transformer for Neighbourhood-aware Click-Through Rate Prediction [74.52904110197004]
We propose Neighbor-Interaction based CTR prediction, which puts this task in a Heterogeneous Information Network (HIN) setting.
In order to enhance the representation of the local neighbourhood, we consider four types of topological interaction among the nodes.
We conduct comprehensive experiments on two real world datasets and the experimental results show that our proposed method outperforms state-of-the-art CTR models significantly.
arXiv Detail & Related papers (2022-01-25T12:44:23Z)
- FAIRS -- Soft Focus Generator and Attention for Robust Object Segmentation from Extreme Points [70.65563691392987]
We present a new approach to generate object segmentation from user inputs in the form of extreme points and corrective clicks.
We demonstrate our method's ability to generate high-quality training data as well as its scalability in incorporating extreme points, guiding clicks, and corrective clicks in a principled manner.
arXiv Detail & Related papers (2020-04-04T22:25:47Z)
- Gradient-Based Adversarial Training on Transformer Networks for Detecting Check-Worthy Factual Claims [3.7543966923106438]
We introduce the first adversarially-regularized, transformer-based claim spotter model.
We obtain a 4.70 point F1-score improvement over current state-of-the-art models.
We propose a method to apply adversarial training to transformer models.
arXiv Detail & Related papers (2020-02-18T16:51:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.