Auditing for Discrimination in Algorithms Delivering Job Ads
- URL: http://arxiv.org/abs/2104.04502v1
- Date: Fri, 9 Apr 2021 17:38:36 GMT
- Title: Auditing for Discrimination in Algorithms Delivering Job Ads
- Authors: Basileal Imana, Aleksandra Korolova, John Heidemann
- Abstract summary: We develop a new methodology for black-box auditing of algorithms for discrimination in the delivery of job advertisements.
Our first contribution is to identify the distinction between skew in ad delivery due to protected categories such as gender or race and skew due to differences in qualification among people in the targeted audience.
Second, we develop an auditing methodology that distinguishes skew explainable by differences in qualifications from skew due to other factors.
Third, we apply our proposed methodology to two prominent targeted advertising platforms for job ads: Facebook and LinkedIn.
- Score: 70.02478301291264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ad platforms such as Facebook, Google and LinkedIn promise value for
advertisers through their targeted advertising. However, multiple studies have
shown that ad delivery on such platforms can be skewed by gender or race due to
hidden algorithmic optimization by the platforms, even when not requested by
the advertisers. Building on prior work measuring skew in ad delivery, we
develop a new methodology for black-box auditing of algorithms for
discrimination in the delivery of job advertisements. Our first contribution is
to identify the distinction between skew in ad delivery due to protected
categories such as gender or race and skew due to differences in
qualification among people in the targeted audience. This distinction is
important in U.S. law, where ads may be targeted based on qualifications, but
not on protected categories. Second, we develop an auditing methodology that
distinguishes between skew explainable by differences in qualifications from
other factors, such as the ad platform's optimization for engagement or
training its algorithms on biased data. Our method controls for job
qualification by comparing ad delivery of two concurrent ads for similar jobs,
but for a pair of companies with different de facto gender distributions of
employees. We describe the careful statistical tests that establish evidence of
non-qualification skew in the results. Third, we apply our proposed methodology
to two prominent targeted advertising platforms for job ads: Facebook and
LinkedIn. We confirm skew by gender in ad delivery on Facebook, and show that
it cannot be justified by differences in qualifications. We fail to find skew
in ad delivery on LinkedIn. Finally, we suggest improvements to ad platform
practices that could make external auditing of their algorithms in the public
interest more feasible and accurate.
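The paper's core statistical comparison can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the authors' actual code): it applies a standard two-proportion z-test to the gender breakdown of delivery audiences for two concurrent, identically targeted ads for similar jobs, which is the kind of careful test the abstract alludes to. The delivery counts below are invented for illustration.

```python
from math import erf, sqrt

def two_proportion_z_test(f1: int, n1: int, f2: int, n2: int):
    """Two-sided z-test for a difference in the fraction of women
    reached by two ads.

    f1, f2: number of women in each ad's delivery audience
    n1, n2: total audience size reached by each ad
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = f1 / n1, f2 / n2
    pooled = (f1 + f2) / (n1 + n2)                      # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical delivery counts for a pair of concurrent ads for
# similar jobs at two companies with different de facto gender
# distributions of employees:
z, p = two_proportion_z_test(f1=700, n1=1000, f2=550, n2=1000)
```

Because both ads target the same audience and advertise similar jobs, a statistically significant difference in delivery fractions (small p-value) cannot be explained by qualification differences alone, which is the crux of the paper's non-qualification-skew argument.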
Related papers
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
arXiv Detail & Related papers (2024-10-30T18:57:03Z)
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z)
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- Discrimination through Image Selection by Job Advertisers on Facebook [79.21648699199648]
We propose and investigate the prevalence of a new means for discrimination in job advertising.
It combines both targeting and delivery -- through the disproportionate representation or exclusion of people of certain demographics in job ad images.
We use the Facebook Ad Library to demonstrate the prevalence of this practice.
arXiv Detail & Related papers (2023-06-13T03:43:58Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Choosing an algorithmic fairness metric for an online marketplace: Detecting and quantifying algorithmic bias on LinkedIn [0.21756081703275995]
We derive an algorithmic fairness metric from the fairness notion of equal opportunity for equally qualified candidates.
We use the proposed method to measure and quantify algorithmic bias with respect to gender of two algorithms used by LinkedIn.
arXiv Detail & Related papers (2022-02-15T10:33:30Z)
- Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring [9.21721532941863]
We analyze various sources of gender biases in online hiring platforms, including the job context and inherent biases of the employers.
Our results demonstrate that while fair ranking algorithms generally improve the selection rates of underrepresented minorities, their effectiveness relies heavily on the job contexts and candidate profiles.
arXiv Detail & Related papers (2020-12-01T11:45:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.