Grading video interviews with fairness considerations
- URL: http://arxiv.org/abs/2007.05461v1
- Date: Thu, 2 Jul 2020 10:06:13 GMT
- Title: Grading video interviews with fairness considerations
- Authors: Abhishek Singhania, Abhishek Unnam and Varun Aggarwal
- Abstract summary: We present a methodology to automatically derive social skills of candidates based on their video response to interview questions.
We develop two machine-learning models to predict social skills.
We analyze fairness by studying the errors of models by race and gender.
- Score: 1.7403133838762446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been considerable interest in predicting human emotions and traits
using facial images and videos. Lately, such work has come under criticism for
poor labeling practices, inconclusive prediction results and fairness
considerations. We present a careful methodology to automatically derive social
skills of candidates based on their video response to interview questions. We,
for the first time, include video data from multiple countries encompassing
multiple ethnicities. Also, the videos were rated by individuals from multiple
racial backgrounds, following several best practices, to achieve a consensus
and unbiased measure of social skills. We develop two machine-learning models
to predict social skills. The first model employs expert-guidance to use
plausibly causal features. The second uses deep learning and depends solely on
the empirical correlations present in the data. We compare errors of both these
models, study the specificity of the models and make recommendations. We
further analyze fairness by studying the errors of models by race and gender.
We verify the usefulness of our models by determining how well they predict
interview outcomes for candidates. Overall, the study provides strong support
for using artificial intelligence for video interview scoring, while taking
care of fairness and ethical considerations.
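As one illustration of the kind of subgroup error analysis described in the abstract (comparing model errors across race and gender groups), here is a minimal sketch. It assumes a pandas DataFrame with hypothetical column names `y_true` (consensus rater score), `y_pred` (model score), and demographic columns such as `race` and `gender`; none of these names, nor the tooling, are taken from the paper itself.

```python
import pandas as pd

def error_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Summarize prediction error for each demographic group.

    Assumes `df` has columns `y_true` (consensus rater score),
    `y_pred` (model score), and a demographic column such as
    'race' or 'gender'. Column names are illustrative only.
    """
    err = df["y_pred"] - df["y_true"]
    out = df.assign(abs_err=err.abs(), signed_err=err)
    # Mean absolute error, mean signed error, and sample size per group.
    return out.groupby(group_col)[["abs_err", "signed_err"]].agg(["mean", "std", "count"])

# Example usage with hypothetical data:
# scores = pd.read_csv("video_interview_scores.csv")
# print(error_by_group(scores, "gender"))
# print(error_by_group(scores, "race"))
```

Comparing mean absolute error and mean signed error per group gives a first-pass view of whether a model systematically under- or over-scores any subgroup.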
Related papers
- Dataset Scale and Societal Consistency Mediate Facial Impression Bias in Vision-Language AI [17.101569078791492]
We study 43 CLIP vision-language models to determine whether they learn human-like facial impression biases.
We show for the first time that the degree to which a bias is shared across a society predicts the degree to which it is reflected in a CLIP model.
arXiv Detail & Related papers (2024-08-04T08:26:58Z)
- VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model [72.13121434085116]
VLBiasBench is a benchmark aimed at evaluating biases in Large Vision-Language Models (LVLMs)
We construct a dataset encompassing nine distinct categories of social bias (age, disability status, gender, nationality, physical appearance, race, religion, profession, and socioeconomic status) as well as two intersectional bias categories (race x gender and race x socioeconomic status).
We conduct extensive evaluations on 15 open-source models as well as one advanced closed-source model, providing new insights into the biases these models reveal.
arXiv Detail & Related papers (2024-06-20T10:56:59Z)
- Quantifying Bias in Text-to-Image Generative Models [49.60774626839712]
Bias in text-to-image (T2I) models can propagate unfair social representations and may be used to aggressively market ideas or push controversial agendas.
Existing T2I model bias evaluation methods only focus on social biases.
We propose an evaluation methodology to quantify general biases in T2I generative models, without any preconceived notions.
arXiv Detail & Related papers (2023-12-20T14:26:54Z)
- Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition [4.4212441764241]
We present the Speech Embedding Association Test (SpEAT), a method for detecting bias in one type of model used for many speech tasks: pre-trained models.
Using the SpEAT, we test for six types of bias in 16 English speech models.
Our work provides evidence that, like text- and image-based models, pre-trained speech-based models frequently learn human-like biases.
arXiv Detail & Related papers (2023-10-29T02:27:56Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups defined by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without a given causal model by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Estimating the Personality of White-Box Language Models [0.589889361990138]
Large-scale language models, which are trained on large corpora of text, are being used in a wide range of applications.
Existing research shows that these models can and do capture human biases.
Many of these biases, especially those that could potentially cause harm, are being well-investigated.
However, studies that infer and change human personality traits inherited by these models have been scarce or non-existent.
arXiv Detail & Related papers (2022-04-25T23:53:53Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models [73.12069620086311]
We investigate the visual reasoning capabilities and social biases of text-to-image models.
First, we measure three visual reasoning skills: object recognition, object counting, and spatial relation understanding.
Second, we assess the gender and skin tone biases by measuring the gender/skin tone distribution of generated images.
arXiv Detail & Related papers (2022-02-08T18:36:52Z)
- On the Basis of Sex: A Review of Gender Bias in Machine Learning Applications [0.0]
We first introduce several examples of machine learning gender bias in practice.
We then detail the most widely used formalizations of fairness in order to address how to make machine learning models fairer.
arXiv Detail & Related papers (2021-04-06T14:11:16Z)
- Detection and Mitigation of Bias in Ted Talk Ratings [3.3598755777055374]
Implicit bias is a behavioral conditioning that leads us to attribute predetermined characteristics to members of certain groups.
This paper quantifies implicit bias in viewer ratings of TED Talks, a diverse social platform for assessing social and professional performance.
arXiv Detail & Related papers (2020-03-02T06:13:24Z)