Grading video interviews with fairness considerations
- URL: http://arxiv.org/abs/2007.05461v1
- Date: Thu, 2 Jul 2020 10:06:13 GMT
- Title: Grading video interviews with fairness considerations
- Authors: Abhishek Singhania, Abhishek Unnam and Varun Aggarwal
- Abstract summary: We present a methodology to automatically derive social skills of candidates based on their video response to interview questions.
We develop two machine-learning models to predict social skills.
We analyze fairness by studying the errors of models by race and gender.
- Score: 1.7403133838762446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been considerable interest in predicting human emotions and traits
using facial images and videos. Lately, such work has come under criticism for
poor labeling practices, inconclusive prediction results and fairness
considerations. We present a careful methodology to automatically derive social
skills of candidates based on their video response to interview questions. We,
for the first time, include video data from multiple countries encompassing
multiple ethnicities. Also, the videos were rated by individuals from multiple
racial backgrounds, following several best practices, to achieve a consensus
and unbiased measure of social skills. We develop two machine-learning models
to predict social skills. The first model employs expert-guidance to use
plausibly causal features. The second uses deep learning and depends solely on
the empirical correlations present in the data. We compare errors of both these
models, study the specificity of the models and make recommendations. We
further analyze fairness by studying the errors of models by race and gender.
We verify the usefulness of our models by determining how well they predict
interview outcomes for candidates. Overall, the study provides strong support
for using artificial intelligence for video interview scoring, while taking
care of fairness and ethical considerations.
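The fairness analysis described above amounts to comparing model errors across demographic groups. The sketch below illustrates one way such a group-wise error audit could be run; it is not the authors' released code, and the column names, score scale, and toy data are invented purely for illustration.

```python
# Illustrative sketch only: the paper does not publish its code, so the
# dataset columns, score range, and metric choices below are assumptions.
import pandas as pd

# Hypothetical data: human-rated social-skill scores, model predictions,
# and self-reported demographic attributes used for the fairness audit.
df = pd.DataFrame({
    "rated_score":     [3.2, 4.1, 2.8, 3.9, 4.5, 2.5, 3.7, 4.0],
    "predicted_score": [3.0, 4.3, 3.1, 3.6, 4.4, 2.9, 3.5, 4.2],
    "race":   ["A", "B", "A", "B", "A", "B", "A", "B"],
    "gender": ["F", "M", "M", "F", "F", "M", "F", "M"],
})

df["abs_error"] = (df["predicted_score"] - df["rated_score"]).abs()
df["signed_error"] = df["predicted_score"] - df["rated_score"]

# Error analysis by group, in the spirit of the paper's fairness study:
# similar mean absolute error across groups suggests comparable accuracy,
# while the signed error reveals systematic over- or under-scoring.
for attr in ["race", "gender"]:
    by_group = df.groupby(attr)[["abs_error", "signed_error"]].mean()
    gap = by_group["abs_error"].max() - by_group["abs_error"].min()
    print(f"\nErrors by {attr}:\n{by_group.round(3)}")
    print(f"Max MAE gap across {attr} groups: {gap:.3f}")
```

Roughly equal mean absolute errors across race and gender groups would support the paper's claim of comparable model accuracy, while a consistent signed-error offset for one group would flag systematic over- or under-scoring.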
Related papers
- Fact-or-Fair: A Checklist for Behavioral Testing of AI Models on Fairness-Related Queries [85.909363478929]
In this study, we focus on 19 real-world statistics collected from authoritative sources.
We develop a checklist comprising objective and subjective queries to analyze behavior of large language models.
We propose metrics to assess factuality and fairness, and formally prove the inherent trade-off between these two aspects.
arXiv Detail & Related papers (2025-02-09T10:54:11Z)
- Fair Knowledge Tracing in Second Language Acquisition [3.7498611358320733]
This study evaluates the fairness of two predictive models using the Duolingo dataset's en_es (English learners speaking Spanish), es_en (Spanish learners speaking English), and fr_en (French learners speaking English) tracks.
Deep learning outperforms machine learning in second-language knowledge tracing, with improved accuracy and fairness.
arXiv Detail & Related papers (2024-12-23T23:47:40Z)
- VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model [72.13121434085116]
We introduce VLBiasBench, a benchmark to evaluate biases in Large Vision-Language Models (LVLMs).
VLBiasBench features a dataset covering nine distinct categories of social bias, including age, disability status, gender, nationality, physical appearance, race, religion, profession, and socioeconomic status, as well as two intersectional bias categories: race x gender and race x socioeconomic status.
We conduct extensive evaluations on 15 open-source models as well as two advanced closed-source models, yielding new insights into the biases present in these models.
arXiv Detail & Related papers (2024-06-20T10:56:59Z)
- Quantifying Bias in Text-to-Image Generative Models [49.60774626839712]
Bias in text-to-image (T2I) models can propagate unfair social representations and may be used to aggressively market ideas or push controversial agendas.
Existing T2I model bias evaluation methods only focus on social biases.
We propose an evaluation methodology to quantify general biases in T2I generative models, without any preconceived notions.
arXiv Detail & Related papers (2023-12-20T14:26:54Z)
- Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition [4.4212441764241]
We present the Speech Embedding Association Test (SpEAT), a method for detecting bias in one type of model used for many speech tasks: pre-trained models.
Using the SpEAT, we test for six types of bias in 16 English speech models.
Our work provides evidence that, like text- and image-based models, pre-trained speech-based models frequently learn human-like biases.
arXiv Detail & Related papers (2023-10-29T02:27:56Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
A key characteristic of the proposed model is that it allows off-the-shelf, non-private fair models to be adopted to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models [73.12069620086311]
We investigate the visual reasoning capabilities and social biases of text-to-image models.
First, we measure three visual reasoning skills: object recognition, object counting, and spatial relation understanding.
Second, we assess the gender and skin tone biases by measuring the gender/skin tone distribution of generated images.
arXiv Detail & Related papers (2022-02-08T18:36:52Z)
- On the Basis of Sex: A Review of Gender Bias in Machine Learning Applications [0.0]
We first introduce several examples of machine learning gender bias in practice.
We then detail the most widely used formalizations of fairness in order to address how to make machine learning models fairer.
arXiv Detail & Related papers (2021-04-06T14:11:16Z)
- Detection and Mitigation of Bias in Ted Talk Ratings [3.3598755777055374]
Implicit bias is a behavioral conditioning that leads us to attribute predetermined characteristics to members of certain groups.
This paper quantifies implicit bias in viewer ratings of TEDTalks, a diverse social platform assessing social and professional performance.
arXiv Detail & Related papers (2020-03-02T06:13:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.