Hiring Fairly in the Age of Algorithms
- URL: http://arxiv.org/abs/2004.07132v1
- Date: Wed, 15 Apr 2020 14:58:52 GMT
- Title: Hiring Fairly in the Age of Algorithms
- Authors: Max Langenkamp, Allan Costa, Chris Cheung
- Abstract summary: We argue that the negative impact of hiring algorithms can be mitigated by greater transparency from the employers to the public.
Our main contribution is a framework for automated hiring transparency: algorithmic transparency reports.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Widespread developments in automation have reduced the need for human input.
However, despite the increased power of machine learning, in many contexts
these programs make decisions that are problematic. Biases within data and
opaque models have amplified human prejudices, as illustrated by Amazon's (now
defunct) experimental hiring algorithm, which was found to
consistently downgrade resumes when the word "women's" was added before an
activity. This article critically surveys the existing legal and technological
landscape surrounding algorithmic hiring. We argue that the negative impact of
hiring algorithms can be mitigated by greater transparency from the employers
to the public, which would enable civil advocate groups to hold employers
accountable, as well as allow the U.S. Department of Justice to litigate. Our
main contribution is a framework for automated hiring transparency, algorithmic
transparency reports, which employers using automated hiring software would be
required to publish by law. We also explain how existing regulations in
employment and trade secret law can be extended by the Equal Employment
Opportunity Commission and Congress to accommodate these reports.
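The abstract does not spell out what an algorithmic transparency report would contain. As a rough, hypothetical sketch of how such a report might be structured, the snippet below pairs basic disclosure fields with the EEOC's four-fifths (adverse impact) rule; every field name, class name, and number is an assumption for illustration, not the authors' specification.

```python
from dataclasses import dataclass, field


@dataclass
class TransparencyReport:
    """Hypothetical algorithmic transparency report (illustrative only)."""
    employer: str
    tool_name: str
    model_inputs: list          # features the hiring model consumes
    selection_rates: dict       # group name -> fraction of applicants advanced
    adverse_impact: dict = field(default_factory=dict)

    def compute_adverse_impact(self):
        # EEOC four-fifths rule: a group's selection rate below 80% of the
        # highest group's rate is treated as evidence of adverse impact.
        top = max(self.selection_rates.values())
        for group, rate in self.selection_rates.items():
            ratio = rate / top
            self.adverse_impact[group] = {"ratio": round(ratio, 3),
                                          "flagged": ratio < 0.8}
        return self.adverse_impact


report = TransparencyReport(
    employer="ExampleCorp",
    tool_name="resume-screener-v2",
    model_inputs=["experience_years", "skills", "education"],
    selection_rates={"men": 0.30, "women": 0.21},
)
print(report.compute_adverse_impact())
# women: ratio 0.7 -> flagged under the four-fifths rule
```

Publishing per-group selection rates in some such form is what would let advocacy groups and regulators spot flagged ratios without needing access to the underlying model.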
Related papers
- Auditing for Racial Discrimination in the Delivery of Education Ads
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, raising legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
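The abstract above does not describe the audit's statistics. As a generic illustration of the kind of disparity check a third-party delivery audit might run (not the paper's actual method), the sketch below applies a two-proportion z-test to made-up impression counts:

```python
# Generic disparity check; all counts are invented for illustration.
from math import sqrt, erf


def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z-test for whether two ad-delivery rates differ."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value


# Hypothetical counts: deliveries of a for-profit college ad by audience group.
z, p = two_proportion_z(success_a=620, n_a=1000, success_b=480, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # large |z|, tiny p -> rates likely differ
```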
- Reputational Algorithm Aversion
This paper shows how algorithm aversion arises when the choice to follow an algorithm conveys information about a human's ability.
I develop a model in which workers make forecasts of an uncertain outcome based on their own private information and an algorithm's signal.
arXiv Detail & Related papers (2024-02-23T16:28:55Z)
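As a toy numerical sketch in the spirit of that setup (not the paper's actual model), the snippet below has a worker optimally combine a private signal with an algorithm's signal, then shade weight away from the algorithm for reputational reasons; all parameters are invented:

```python
# Toy simulation, NOT the paper's specification: all numbers are invented.
import random

random.seed(0)
theta = 1.0                            # true outcome to forecast
sigma_private, sigma_algo = 0.8, 0.4   # noise of private vs. algorithm signal


def mean_sq_error(reputation_shade=0.0, n=10_000):
    """Average squared forecast error when the worker shades weight
    away from the algorithm to avoid looking like a mere follower."""
    # Precision-weighted (statistically efficient) weight on the algorithm:
    w_algo = (1 / sigma_algo**2) / (1 / sigma_algo**2 + 1 / sigma_private**2)
    w = w_algo * (1 - reputation_shade)
    err = 0.0
    for _ in range(n):
        s_priv = theta + random.gauss(0, sigma_private)
        s_algo = theta + random.gauss(0, sigma_algo)
        err += (w * s_algo + (1 - w) * s_priv - theta) ** 2
    return err / n


print(mean_sq_error(0.0))   # efficient forecast
print(mean_sq_error(0.5))   # worker downweights the algorithm; error rises
```

The rising squared error under shading is the efficiency cost of algorithm aversion in this toy setting.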
- Fairness and Bias in Algorithmic Hiring: a Multidisciplinary Survey
This survey caters to practitioners and researchers with a balanced and integrated coverage of systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness.
Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations, and provides recommendations for future work to ensure shared benefits for all stakeholders.
arXiv Detail & Related papers (2023-09-25T08:04:18Z)
- National Origin Discrimination in Deep-learning-powered Automated Resume Screening
Many companies and organizations have started to use some form of AI-enabled automated tools to assist in their hiring process.
There are increasing concerns about unfair treatment of candidates caused by underlying bias in AI systems.
This study examined deep learning methods, a recent technological breakthrough, with a focus on their application to automated resume screening.
arXiv Detail & Related papers (2023-07-13T01:35:29Z)
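A common way to probe a screener for this kind of bias, and the same pattern that would expose the Amazon example above, is a counterfactual name swap: hold the resume fixed and vary only the name. The sketch below is a hypothetical illustration with an intentionally biased stand-in model, not the study's method; all names and resume text are invented:

```python
# Counterfactual name-swap probe with a deliberately biased toy model.
RESUME = "Software engineer, 5 years of Python, BSc Computer Science. Name: {name}"


def toy_screener(text: str) -> float:
    """Stand-in for a trained screening model; biased on purpose."""
    return 0.9 if "Emily" in text or "Greg" in text else 0.7


def name_swap_gap(model, names_a, names_b):
    """Mean score difference between two name groups on identical resumes."""
    avg = lambda names: sum(model(RESUME.format(name=n)) for n in names) / len(names)
    return avg(names_a) - avg(names_b)


gap = name_swap_gap(toy_screener,
                    ["Emily Walsh", "Greg Baker"],
                    ["Lakisha Washington", "Jamal Jones"])
print(f"score gap on identical resumes: {gap:.2f}")  # 0.20 -> name-based bias
```

A gap far from zero on otherwise identical resumes is evidence that the screener keys on the name itself rather than qualifications.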
- Rethinking People Analytics With Inverse Transparency by Design
We propose a new design approach for workforce analytics that we refer to as inverse transparency by design.
We find that the required architectural changes can be made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realizing accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
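"Inverse transparency" here means making every use of employee data visible to the affected employee. The sketch below is a minimal illustration of that idea; the field names and API are assumptions, not the paper's design:

```python
# Minimal inverse-transparency sketch: every access to an employee's data
# is logged and exposed to that employee, reversing the usual visibility.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AccessEvent:
    accessor: str      # who used the data
    subject: str       # whose data was used
    purpose: str       # declared reason for the access
    timestamp: str


LOG: list[AccessEvent] = []


def log_access(accessor, subject, purpose):
    LOG.append(AccessEvent(accessor, subject, purpose,
                           datetime.now(timezone.utc).isoformat()))


def my_data_usage(subject):
    """What the data subject sees: every recorded use of their data."""
    return [e for e in LOG if e.subject == subject]


log_access("analytics-service", "alice", "team productivity dashboard")
log_access("hr-manager", "alice", "promotion review")
for event in my_data_usage("alice"):
    print(event.accessor, "->", event.purpose)
```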
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
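The abstract does not detail how auditing and user privacy are reconciled. One standard building block for answering auditors' aggregate queries without exposing individual users is differentially private noise; the sketch below shows that generic technique only, not the paper's proposed mechanism:

```python
# Generic epsilon-DP count release, NOT the paper's specific mechanism.
import random


def laplace(scale: float) -> float:
    # Difference of two iid exponentials is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)


def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count query with Laplace(1/epsilon) noise (sensitivity 1)."""
    return true_count + laplace(1 / epsilon)


random.seed(1)
# Hypothetical auditor query: how many users in group A saw job ad X?
print(dp_count(true_count=4210, epsilon=0.5))  # noisy but useful aggregate
```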
- Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring
We analyze various sources of gender biases in online hiring platforms, including the job context and inherent biases of the employers.
Our results demonstrate that while fair ranking algorithms generally improve the selection rates of underrepresented minorities, their effectiveness relies heavily on the job contexts and candidate profiles.
arXiv Detail & Related papers (2020-12-01T11:45:27Z)
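As a simplified illustration of fairness-constrained re-ranking of the general kind such studies evaluate (not the paper's exact algorithms; the data and target share are invented), the sketch below greedily ranks by score but lifts a protected-group candidate whenever the ranking prefix falls below a target share:

```python
# Simplified fairness-constrained re-ranking; all data is invented.
def fair_rerank(candidates, target_share):
    """candidates: list of (name, score, is_protected) tuples."""
    pool = sorted(candidates, key=lambda c: -c[1])
    ranked, protected_so_far = [], 0
    while pool:
        k = len(ranked) + 1
        need_protected = protected_so_far < int(target_share * k)
        pick = next((c for c in pool if c[2]), None) if need_protected else None
        if pick is None:          # constraint satisfied (or no protected left)
            pick = pool[0]        # take the highest-scoring candidate
        pool.remove(pick)
        ranked.append(pick)
        protected_so_far += pick[2]
    return ranked


cands = [("A", 0.9, False), ("B", 0.8, False), ("D", 0.6, False),
         ("C", 0.3, True), ("E", 0.2, True)]
for name, score, prot in fair_rerank(cands, target_share=0.4):
    print(name, score, "protected" if prot else "")
# C (0.3) is lifted above D (0.6) to keep protected share near the target.
```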
- "What We Can't Measure, We Can't Understand": Challenges to Demographic Data Procurement in the Pursuit of Fairness
Algorithmic fairness practitioners often do not have access to the demographic data they feel they need to detect bias in practice.
We investigated this dilemma through semi-structured interviews with 38 practitioners and professionals either working in or adjacent to algorithmic fairness.
Participants painted a complex picture of what demographic data availability and use look like on the ground.
arXiv Detail & Related papers (2020-10-30T21:06:41Z)
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z)
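The demo's actual debiasing algorithm is not described in the abstract. One simple instance of removing sensitive information from a representation is to regress each feature on the sensitive attribute and keep only the residual; the sketch below shows that generic linear technique with toy data, not the demo's method:

```python
# Generic linear sensitive-information removal; toy data, illustrative only.
def mean(xs):
    return sum(xs) / len(xs)


def remove_sensitive_direction(X, s):
    """X: list of feature rows; s: 0/1 sensitive attribute per row.
    Regresses each feature on s and keeps only the residual (plus the
    overall mean), so no feature retains linear information about s."""
    s_bar = mean(s)
    s_var = mean([(si - s_bar) ** 2 for si in s]) or 1.0
    coefs = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        col_bar = mean(col)
        cov = mean([(si - s_bar) * (xj - col_bar) for si, xj in zip(s, col)])
        coefs.append(cov / s_var)
    return [[xj - b * (si - s_bar) for xj, b in zip(row, coefs)]
            for row, si in zip(X, s)]


X = [[1.0, 5.0], [1.2, 5.1], [3.0, 2.0], [3.1, 2.2]]   # toy embeddings
s = [0, 0, 1, 1]                                        # toy sensitive labels
for row in remove_sensitive_direction(X, s):
    print([round(v, 2) for v in row])  # per-group feature means now coincide
```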
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
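As a minimal sketch of how such a synthetic, deliberately biased testbed can be built (all fields and coefficients are invented, not the paper's generator), the snippet below injects a known group penalty into otherwise merit-based labels so that a trained model can later be checked for inheriting it:

```python
# Toy synthetic-profile generator with an injected, known bias.
import random

random.seed(42)


def synthetic_profile(bias_strength=0.15):
    experience = random.uniform(0, 10)
    skills = random.uniform(0, 1)
    gender = random.choice(["F", "M"])
    merit = 0.07 * experience + 0.3 * skills          # "true" quality
    biased_label = merit - (bias_strength if gender == "F" else 0.0)
    return {"experience": experience, "skills": skills,
            "gender": gender, "merit": merit, "label": biased_label}


data = [synthetic_profile() for _ in range(10_000)]


def group_mean(g, key):
    vals = [d[key] for d in data if d["gender"] == g]
    return sum(vals) / len(vals)


print("merit gap:", round(group_mean("M", "merit") - group_mean("F", "merit"), 3))
print("label gap:", round(group_mean("M", "label") - group_mean("F", "label"), 3))
# merit gap ~0 by construction; label gap ~= the injected bias_strength
```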
This list is automatically generated from the titles and abstracts of the papers on this site.