Position bias in features
- URL: http://arxiv.org/abs/2402.02626v1
- Date: Sun, 4 Feb 2024 22:15:30 GMT
- Title: Position bias in features
- Authors: Richard Demsyn-Jones
- Abstract summary: Document-specific historical click-through rates can be important features in a dynamic ranking system.
This paper describes the properties of several such features, and tests them in controlled experiments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The purpose of modeling document relevance for search engines is to rank
better in subsequent searches. Document-specific historical click-through rates
can be important features in a dynamic ranking system which updates as we
accumulate more samples. This paper describes the properties of several such
features, and tests them in controlled experiments. Extending the inverse
propensity weighting method to documents creates an unbiased estimate of
document relevance. This feature can approximate relevance accurately, leading
to near-optimal ranking in ideal circumstances. However, it has high variance
that increases with the degree of position bias. Furthermore,
inaccurate position bias estimation leads to poor performance. Under several
scenarios this feature can perform worse than biased click-through rates. This
paper underscores the need for accurate position bias estimation, and is unique
in suggesting simultaneous use of biased and unbiased position bias features.
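The document-level inverse propensity weighting described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the propensity values, the impression-log format, and the function names are assumptions made for the example. Each click is up-weighted by the inverse of the estimated examination probability at its position, which debiases the CTR estimate but inflates its variance when propensities are small.

```python
# Assumed examination propensities by rank position (illustrative values):
# the probability a user examines a result shown at that position.
PROPENSITY = {1: 1.0, 2: 0.6, 3: 0.4, 4: 0.25, 5: 0.15}

def naive_ctr(impressions):
    """Biased CTR feature: raw clicks / impressions, ignoring position."""
    if not impressions:
        return 0.0
    return sum(clicked for _, clicked in impressions) / len(impressions)

def ipw_ctr(impressions):
    """Unbiased CTR estimate from (position, clicked) impression records.

    Each click is weighted by 1 / propensity(position), so documents that
    were mostly shown at low-examination positions are not penalized.
    Small propensities make individual terms large, which is why the
    variance of this estimate grows with the degree of position bias.
    """
    if not impressions:
        return 0.0
    weighted_clicks = sum(
        clicked / PROPENSITY[pos] for pos, clicked in impressions
    )
    return weighted_clicks / len(impressions)
```

For a document shown only at position 2, two impressions with one click give a naive CTR of 0.5 but an IPW estimate of (1/0.6)/2 ≈ 0.83, recovering the higher relevance the position bias masked. The abstract's suggestion to use biased and unbiased features simultaneously corresponds to feeding both `naive_ctr` and `ipw_ctr` to the ranking model.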
Related papers
- Eliminating Position Bias of Language Models: A Mechanistic Approach [119.34143323054143]
Position bias has proven to be a prevalent issue of modern language models (LMs)
We find that causal attention generally causes models to favor distant content, while relative positional encodings like RoPE prefer nearby ones.
We propose to ELIMINATE position bias caused by different input segment orders (e.g., options in LM-as-a-judge, retrieved documents in QA) in a TRAINING-FREE ZERO-SHOT manner.
arXiv Detail & Related papers (2024-07-01T09:06:57Z) - Measuring and Addressing Indexical Bias in Information Retrieval [69.7897730778898]
PAIR framework supports automatic bias audits for ranked documents or entire IR systems.
After introducing DUO, we run an extensive evaluation of 8 IR systems on a new corpus of 32k synthetic and 4.7k natural documents.
A human behavioral study validates our approach, showing that our bias metric can help predict when and how indexical bias will shift a reader's opinion.
arXiv Detail & Related papers (2024-06-06T17:42:37Z) - Mitigate Position Bias in Large Language Models via Scaling a Single Dimension [47.792435921037274]
This paper first explores the micro-level manifestations of position bias, concluding that attention weights are a micro-level expression of position bias.
It further identifies that, in addition to position embeddings, causal attention mask also contributes to position bias by creating position-specific hidden states.
Based on these insights, we propose a method to mitigate position bias by scaling these positional hidden states.
arXiv Detail & Related papers (2024-06-04T17:55:38Z) - Semantic Properties of cosine based bias scores for word embeddings [52.13994416317707]
We propose requirements for bias scores to be considered meaningful for quantifying biases.
We analyze cosine based scores from the literature with regard to these requirements.
We underline these findings with experiments to show that the bias scores' limitations have an impact in the application case.
arXiv Detail & Related papers (2024-01-27T20:31:10Z) - Measurement and applications of position bias in a marketplace search engine [0.0]
Search engines intentionally influence user behavior by picking and ranking the list of results.
This paper describes our efforts at Thumbtack to understand the impact of ranking.
We include a novel discussion of how ranking bias may not only affect labels, but also model features.
arXiv Detail & Related papers (2022-06-23T14:09:58Z) - Improving Evaluation of Debiasing in Image Classification [29.711865666774017]
Our study indicates several issues need to be improved when conducting evaluation of debiasing in image classification.
Based on such issues, this paper proposes an evaluation metric, the Align-Conflict (AC) score, as the tuning criterion.
We believe our findings and lessons inspire future researchers in debiasing to further push state-of-the-art performances with fair comparisons.
arXiv Detail & Related papers (2022-06-08T05:24:13Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - The SAME score: Improved cosine based bias score for word embeddings [63.24247894974291]
We provide a bias definition based on the ideas from the literature and derive novel requirements for bias scores.
We propose a new bias score, SAME, to address the shortcomings of existing bias scores and show empirically that SAME is better suited to quantify biases in word embeddings.
arXiv Detail & Related papers (2022-03-28T09:28:13Z) - Unbiased Pairwise Learning to Rank in Recommender Systems [4.058828240864671]
Unbiased learning to rank algorithms are appealing candidates and have already been applied in many applications with single categorical labels.
We propose a novel unbiased LTR algorithm to tackle the challenges, which innovatively models position bias in the pairwise fashion.
Experiment results on public benchmark datasets and internal live traffic show the superior results of the proposed method for both categorical and continuous labels.
arXiv Detail & Related papers (2021-11-25T06:04:59Z) - AutoDebias: Learning to Debias for Recommendation [43.84313723394282]
We propose AutoDebias, which leverages another (small) set of uniform data to optimize the debiasing parameters.
We derive the generalization bound for AutoDebias and prove its ability to acquire the appropriate debiasing strategy.
arXiv Detail & Related papers (2021-05-10T08:03:48Z) - Mitigating the Position Bias of Transformer Models in Passage Re-Ranking [12.526786110360622]
Supervised machine learning models and their evaluation strongly depend on the quality of the underlying dataset.
We observe a bias in the position of the correct answer in the text in two popular Question Answering datasets used for passage re-ranking.
We demonstrate that by mitigating the position bias, Transformer-based re-ranking models are equally effective on a biased and debiased dataset.
arXiv Detail & Related papers (2021-01-18T10:38:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.