Testing software for non-discrimination: an updated and extended audit in the Italian car insurance domain
- URL: http://arxiv.org/abs/2502.06439v1
- Date: Mon, 10 Feb 2025 13:16:01 GMT
- Title: Testing software for non-discrimination: an updated and extended audit in the Italian car insurance domain
- Authors: Marco Rondina, Antonio Vetrò, Riccardo Coppola, Oumaima Regragrui, Alessandro Fabris, Gianmaria Silvello, Gian Antonio Susto, Juan Carlos De Martin
- Abstract summary: Fairness in pricing algorithms grants equitable access to basic services without discriminating on the basis of protected attributes.
We replicate a previous empirical study that used black box testing to audit pricing algorithms used by Italian car insurance companies.
- Score: 40.846175330181005
- Abstract: Context. As software systems become more integrated into society's infrastructure, the responsibility of software professionals to ensure compliance with various non-functional requirements increases. These requirements include security, safety, privacy, and, increasingly, non-discrimination. Motivation. Fairness in pricing algorithms grants equitable access to basic services without discriminating on the basis of protected attributes. Method. We replicate a previous empirical study that used black box testing to audit pricing algorithms used by Italian car insurance companies, accessible through a popular online system. Compared with the previous study, we increased both the number of tests and the number of demographic variables under analysis. Results. Our work confirms and extends previous findings, highlighting the problematic permanence of discrimination across time: demographic variables significantly impact pricing to this day, with birthplace remaining the main discriminatory factor against individuals not born in Italian cities. We also found that driver profiles can determine the number of quotes available to the user, denying equal opportunities to all. Conclusion. The study underscores the importance of testing for non-discrimination in software systems that affect people's everyday lives. Performing algorithmic audits over time makes it possible to evaluate the evolution of such algorithms. It also demonstrates the role that empirical software engineering can play in making software systems more accountable.
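For illustration, below is a minimal sketch of the kind of black-box counterfactual test described in the abstract. It is not the authors' actual test harness: request_quote, the base profile, and the attribute values are hypothetical stand-ins for the online quote system and the real driver profiles. The idea is to build profiles that differ only in a single protected attribute and compare the premiums returned for each value.

```python
from statistics import mean

# Illustrative base profile and protected-attribute values (hypothetical).
BASE_PROFILE = {"age": 35, "residence": "Torino", "vehicle": "Fiat Panda 1.2"}
PROTECTED_ATTRIBUTES = {
    "birthplace": ["Torino", "Milano", "Lagos", "Bucharest"],
    "gender": ["male", "female"],
}

def request_quote(profile):
    """Stand-in for the online quote system: a real audit would submit the
    profile through the insurer's web form and parse the returned premium.
    Here it returns a deterministic dummy price so the sketch runs."""
    seed = sum(ord(c) for c in "".join(str(v) for v in profile.values()))
    return 500.0 + seed % 120

def audit():
    # For each protected attribute, hold every other field fixed and vary
    # only that attribute (counterfactual profiles), then compare quotes.
    for attribute, values in PROTECTED_ATTRIBUTES.items():
        quotes = {value: request_quote({**BASE_PROFILE, attribute: value})
                  for value in values}
        spread = max(quotes.values()) - min(quotes.values())
        print(f"{attribute}: {quotes} | spread = {spread:.2f} EUR, "
              f"mean = {mean(quotes.values()):.2f} EUR")

if __name__ == "__main__":
    audit()
```

A real audit would replace request_quote with automated interaction with the quote portal, repeat the comparison across many base profiles, record the number of quotes returned as well as their prices, and re-run the tests over time to track whether disparities persist.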
Related papers
- Exposing Algorithmic Discrimination and Its Consequences in Modern Society: Insights from a Scoping Study [0.0]
This study delves into various studies published over the years reporting algorithmic discrimination.
We aim to support software engineering researchers and practitioners in addressing this issue.
arXiv Detail & Related papers (2023-12-08T05:09:00Z) - Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are tight, but they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Discrimination in machine learning algorithms [0.0]
Machine learning algorithms are routinely used for business decisions that may directly affect individuals, for example, because a credit scoring algorithm refuses them a loan.
It is therefore relevant, from an ethical (and legal) point of view, to ensure that these algorithms do not discriminate based on sensitive attributes (like sex or race), since such discrimination may occur without the operator or the management being aware of it.
arXiv Detail & Related papers (2022-06-30T21:35:42Z) - Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z) - Evaluating Proposed Fairness Models for Face Recognition Algorithms [0.0]
This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe.
We propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines a set of properties desirable in a face recognition algorithm fairness measure.
We believe this is currently the largest open-source dataset of its kind.
arXiv Detail & Related papers (2022-03-09T21:16:43Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - A Normative approach to Attest Digital Discrimination [6.372554934045607]
Examples include low-income neighbourhoods being targeted with high-interest loans or low credit scores, and women being undervalued by 21% in online marketing.
We use norms as an abstraction to represent different situations that may lead to digital discrimination.
In particular, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms.
arXiv Detail & Related papers (2020-07-14T15:14:52Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z) - DeBayes: a Bayesian Method for Debiasing Network Embeddings [16.588468396705366]
We propose DeBayes: a conceptually elegant Bayesian method that is capable of learning debiased embeddings by using a biased prior.
Our experiments show that these representations can then be used to perform link prediction that is significantly more fair in terms of popular metrics.
arXiv Detail & Related papers (2020-02-26T12:57:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.