Overview of the TREC 2020 Fair Ranking Track
- URL: http://arxiv.org/abs/2108.05135v1
- Date: Wed, 11 Aug 2021 10:22:05 GMT
- Title: Overview of the TREC 2020 Fair Ranking Track
- Authors: Asia J. Biega, Fernando Diaz, Michael D. Ekstrand, Sergey Feldman,
Sebastian Kohlmeier
- Abstract summary: This paper provides an overview of the NIST TREC 2020 Fair Ranking track.
The central goal of the Fair Ranking track is to provide fair exposure to different groups of authors.
- Score: 64.16623297717642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper provides an overview of the NIST TREC 2020 Fair Ranking track. For
2020, we again adopted an academic search task, where we have a corpus of
academic article abstracts and queries submitted to a production academic
search engine. The central goal of the Fair Ranking track is to provide fair
exposure to different groups of authors (a group fairness framing). We
recognize that there may be multiple group definitions (e.g. based on
demographics, stature, topic) and hoped for the systems to be robust to these.
We expected participants to develop systems that optimize for fairness and
relevance for arbitrary group definitions, and did not reveal the exact group
definitions until after the evaluation runs were submitted. The track contains
two tasks, reranking and retrieval, with a shared evaluation.
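To make the group-fairness framing concrete, below is a minimal sketch of how exposure could be aggregated per author group across a set of rankings. The position-based logarithmic discount, the data layout, and the names `position_exposure`, `group_exposure`, `rankings`, and `doc_groups` are illustrative assumptions, not the track's official evaluation metric.

```python
# Sketch: per-group expected exposure over a set of rankings (assumed exposure model).
import math
from collections import defaultdict


def position_exposure(rank: int) -> float:
    """Hypothetical position-based discount: 1 / log2(rank + 1)."""
    return 1.0 / math.log2(rank + 1)


def group_exposure(rankings, doc_groups):
    """Average exposure each author group receives across rankings.

    rankings:   list of ranked lists of document ids (one list per query).
    doc_groups: dict mapping document id -> set of author-group labels.
    """
    totals = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            exposure = position_exposure(rank)
            groups = doc_groups.get(doc_id, set())
            for group in groups:
                # Split a document's exposure evenly across its groups.
                totals[group] += exposure / len(groups)
    n_queries = max(len(rankings), 1)
    return {group: total / n_queries for group, total in totals.items()}


# Toy usage example.
rankings = [["d1", "d2", "d3"], ["d2", "d3", "d1"]]
doc_groups = {"d1": {"group_a"}, "d2": {"group_b"}, "d3": {"group_a", "group_b"}}
print(group_exposure(rankings, doc_groups))
```

A system robust to arbitrary group definitions, as the track asks for, would need such per-group exposure to be reasonably balanced (relative to relevance) no matter how `doc_groups` is later defined.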
Related papers
- Full Stage Learning to Rank: A Unified Framework for Multi-Stage Systems [40.199257203898846]
We propose an improved ranking principle for multi-stage systems, namely the Generalized Probability Ranking Principle (GPRP)
GPRP emphasizes both the selection bias in each stage of the system pipeline as well as the underlying interest of users.
Our core idea is to first estimate the selection bias in the subsequent stages and then learn a ranking model that best complies with the downstream modules' selection bias.
arXiv Detail & Related papers (2024-05-08T06:35:04Z)
- Towards Group-aware Search Success [12.281168800322458]
We introduce a novel metric, named Group-aware Search Success (GA-SS)
GA-SS redefines search success to ensure that all demographic groups achieve satisfaction from search outcomes.
We empirically validate our metric and approach with two real-world datasets.
arXiv Detail & Related papers (2024-04-26T10:45:34Z)
- BLP-2023 Task 2: Sentiment Analysis [7.725694295666573]
We present an overview of the BLP Sentiment Shared Task, organized as part of the inaugural BLP 2023 workshop.
The task is defined as the detection of sentiment in a given piece of social media text.
This paper provides a detailed account of the task setup, including dataset development and evaluation setup.
arXiv Detail & Related papers (2023-10-24T21:00:41Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs)
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Blackbox Post-Processing for Multiclass Fairness [1.5305403478254664]
We consider modifying the predictions of a blackbox machine learning classifier in order to achieve fairness in a multiclass setting.
We explore when our approach produces both fair and accurate predictions through systematic synthetic experiments.
We find that overall, our approach produces minor drops in accuracy and enforces fairness when the number of individuals in the dataset is high.
arXiv Detail & Related papers (2022-01-12T13:21:20Z)
- RUSSE'2020: Findings of the First Taxonomy Enrichment Task for the Russian Language [70.27072729280528]
This paper describes the results of the first shared task on taxonomy enrichment for the Russian language.
16 teams participated in the task, demonstrating strong results, with more than half of them outperforming the provided baseline.
arXiv Detail & Related papers (2020-05-22T13:30:37Z)
- Overview of the TREC 2019 Fair Ranking Track [65.15263872493799]
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers.
This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process.
arXiv Detail & Related papers (2020-03-25T21:34:58Z)
- Recognizing Families In the Wild: White Paper for the 4th Edition Data Challenge [91.55319616114943]
This paper summarizes the supported tasks (i.e., kinship verification, tri-subject verification, and search & retrieval of missing children) in the Recognizing Families In the Wild (RFIW) evaluation.
The purpose of this paper is to describe the 2020 RFIW challenge, end-to-end, along with forecasts in promising future directions.
arXiv Detail & Related papers (2020-02-15T02:22:42Z)