It is Giving Major Satisfaction: Why Fairness Matters for Developers
- URL: http://arxiv.org/abs/2410.02482v1
- Date: Thu, 3 Oct 2024 13:40:00 GMT
- Title: It is Giving Major Satisfaction: Why Fairness Matters for Developers
- Authors: Emeralda Sesari, Federica Sarro, Ayushi Rastogi
- Abstract summary: This study aims to examine how fairness perceptions relate to job satisfaction among software practitioners.
Our findings indicate that all four fairness dimensions (distributive, procedural, interpersonal, and informational) significantly affect job satisfaction.
The relationship between fairness perceptions and job satisfaction is notably stronger for practitioners who are female, ethnically underrepresented, or less experienced, and for those with work limitations.
- Score: 9.312605205492456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Software practitioners often face unfairness in their work, such as unequal recognition of contributions, gender bias, and unclear criteria for performance reviews. While the link between fairness and job satisfaction has been established in other fields, its relevance to software professionals remains underexplored. This study examines how fairness perceptions relate to job satisfaction among software practitioners, focusing on both general trends and demographic-specific differences. We conducted an online survey of 108 software practitioners and used ordinal logistic regression to analyze the relationship between fairness perceptions and job satisfaction in software engineering contexts, with moderation analysis examining how this relationship varies across demographic groups. Our findings indicate that all four fairness dimensions (distributive, procedural, interpersonal, and informational) significantly affect both overall job satisfaction and satisfaction with job security. Among these, interpersonal fairness has the largest impact, being more than twice as influential on overall job satisfaction. The relationship between fairness perceptions and job satisfaction is notably stronger for practitioners who are female, ethnically underrepresented, or less experienced, and for those with work limitations. Fairness in authorship emerged as an important factor for job satisfaction overall, while fairness in policy implementation, high-demand situations, and working hours particularly affected specific demographic groups. This study highlights the unique role of fairness in software engineering and offers strategies for organizations to promote fair practices, along with targeted approaches for specific demographic groups.
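The methodology described above (ordinal logistic regression with moderation analysis) can be sketched as a proportional-odds model whose linear predictor includes a fairness x demographic interaction term; the interaction coefficient is what makes the fairness slope steeper for a moderated group. All variable names, cut-points, and coefficient values below are hypothetical, chosen only to illustrate the model form, not taken from the paper.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def satisfaction_probs(fairness, in_moderated_group,
                       thresholds=(-1.0, 0.5, 2.0),  # cut-points for 4 ordered levels
                       b_fair=0.8, b_group=-0.3, b_interact=0.6):
    """Return P(satisfaction = k) for ordered levels k = 1..len(thresholds)+1.

    The linear predictor includes an interaction term, so the effective
    fairness slope for the moderated group is b_fair + b_interact,
    mirroring how moderation analysis encodes a stronger relationship
    for certain demographic groups.
    """
    eta = (b_fair * fairness
           + b_group * in_moderated_group
           + b_interact * fairness * in_moderated_group)
    # Proportional-odds model: P(Y <= k) = logistic(tau_k - eta)
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    # Category probabilities are differences of adjacent cumulative probabilities.
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Higher perceived fairness shifts probability mass toward higher
# satisfaction levels, and does so more steeply for the moderated group.
base = satisfaction_probs(fairness=0.0, in_moderated_group=1)
high = satisfaction_probs(fairness=2.0, in_moderated_group=1)
```

In practice such a model would be fitted with a library routine (e.g. an ordered-logit estimator) rather than with fixed coefficients; the sketch only shows how the interaction term carries the moderation effect.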
Related papers
- A Human-in-the-Loop Fairness-Aware Model Selection Framework for Complex Fairness Objective Landscapes [37.5215569371757]
ManyFairHPO is a fairness-aware model selection framework that enables practitioners to navigate complex and nuanced fairness objective landscapes.
We demonstrate the effectiveness of ManyFairHPO in balancing multiple fairness objectives, mitigating risks such as self-fulfilling prophecies, and providing interpretable insights to guide stakeholders in making fairness-aware modeling decisions.
arXiv Detail & Related papers (2024-10-17T07:32:24Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- The Role of Relevance in Fair Ranking [1.5469452301122177]
We argue that relevance scores should satisfy a set of desired criteria in order to guide fairness interventions.
We then empirically show that not all of these criteria are met in a case study of relevance inferred from biased user click data.
Our analyses and results surface the pressing need for new approaches to relevance collection and generation.
arXiv Detail & Related papers (2023-05-09T16:58:23Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Long-term dynamics of fairness: understanding the impact of data-driven targeted help on job seekers [1.357291726431012]
We use an approach that combines statistics and machine learning to assess long-term fairness effects of labor market interventions.
We develop and use a model to investigate the impact of decisions caused by a public employment authority that selectively supports job-seekers.
We conclude that in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real-world, careful modeling of the surrounding labor market is indispensable.
arXiv Detail & Related papers (2022-08-17T12:03:23Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair) and propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z)
- Fair Machine Learning Under Partial Compliance [22.119168255562897]
We propose a simple model of an employment market, leveraging simulation as a tool to explore the impact of both interaction effects and incentive effects on outcomes and auditing metrics.
Our key findings are that at equilibrium, partial compliance (k% of employers) can result in far less than proportional (k%) progress towards the full compliance outcomes.
arXiv Detail & Related papers (2020-11-07T01:46:53Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.