What Makes a Fairness Tool Project Sustainable in Open Source?
- URL: http://arxiv.org/abs/2505.09802v1
- Date: Wed, 14 May 2025 20:58:26 GMT
- Title: What Makes a Fairness Tool Project Sustainable in Open Source?
- Authors: Sadia Afrin Mim, Fatemeh Vares, Andrew Meenly, Brittany Johnson,
- Abstract summary: Many fairness tools are publicly available for free use and adaptation. Because fairness is an ongoing concern, these tools must be built for long-term sustainability.
- Score: 4.637328271312331
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As society becomes increasingly reliant on artificial intelligence, the need to mitigate risk and harm is paramount. In response, researchers and practitioners have developed tools to detect and reduce undesired bias, commonly referred to as fairness tools. Many of these tools are publicly available for free use and adaptation. While the growing availability of such tools is promising, little is known about the broader landscape beyond well-known examples like AI Fairness 360 and Fairlearn. Because fairness is an ongoing concern, these tools must be built for long-term sustainability. Using an existing set of fairness tools as a reference, we systematically searched GitHub and identified 50 related projects. We then analyzed various aspects of their repositories to assess community engagement and the extent of ongoing maintenance. Our findings show diverse forms of engagement with these tools, suggesting strong support for open-source development. However, we also found significant variation in how well these tools are maintained. Notably, 53 percent of fairness projects become inactive within the first three years. By examining sustainability in fairness tooling, we aim to promote more stability and growth in this critical area.
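Below is a rough, illustrative sketch of the kind of repository probe the abstract describes: pulling engagement and maintenance signals for a single repository via the public GitHub REST API and flagging dormancy. The endpoint and response fields are standard GitHub API v3, but the three-year inactivity cutoff, the `repo_signals` helper, and the example repository are assumptions for illustration, not the authors' actual protocol. Unauthenticated requests are rate-limited, so a token is optional but recommended.

```python
# Sketch: pull engagement/maintenance signals for one repository via the
# public GitHub REST API. The inactivity cutoff below is an illustrative
# assumption echoing the "three years" finding, not the paper's criterion.
from datetime import datetime, timezone

import requests

INACTIVITY_DAYS = 3 * 365  # hypothetical dormancy threshold


def repo_signals(owner, repo, token=None):
    """Return star/fork/issue counts and days since the last push."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}", headers=headers, timeout=30
    )
    resp.raise_for_status()
    data = resp.json()
    last_push = datetime.fromisoformat(data["pushed_at"].replace("Z", "+00:00"))
    idle_days = (datetime.now(timezone.utc) - last_push).days
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "days_since_last_push": idle_days,
        "inactive": idle_days > INACTIVITY_DAYS,
    }


if __name__ == "__main__":
    # Example query against a well-known fairness toolkit named in the abstract.
    print(repo_signals("fairlearn", "fairlearn"))
```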
Related papers
- From Expectation to Habit: Why Do Software Practitioners Adopt Fairness Toolkits? [11.05629708648904]
This study investigates the factors influencing the adoption of fairness toolkits from an individual perspective. Our findings reveal that performance expectancy and habit are the primary drivers of fairness toolkit adoption. Practical recommendations include improving toolkit usability, integrating bias mitigation processes into routine development, and providing ongoing support.
arXiv Detail & Related papers (2024-12-18T13:38:28Z) - From Literature to Practice: Exploring Fairness Testing Tools for the Software Industry Adoption [5.901307724130718]
In today's world, we need to ensure that AI systems are fair and unbiased.
Current fairness testing tools need significant improvements to better support software developers.
New tools should be user-friendly, well-documented, and flexible enough to handle different kinds of data.
arXiv Detail & Related papers (2024-09-04T04:23:08Z) - Tool Learning with Large Language Models: A Survey [60.733557487886635]
Tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems.
Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization.
arXiv Detail & Related papers (2024-05-28T08:01:26Z) - TOOLVERIFIER: Generalization to New Tools via Self-Verification [69.85190990517184]
We introduce a self-verification method which distinguishes between close candidates by self-asking contrastive questions during tool selection.
Experiments on 4 tasks from the ToolBench benchmark, consisting of 17 unseen tools, demonstrate an average improvement of 22% over few-shot baselines.
arXiv Detail & Related papers (2024-02-21T22:41:38Z) - Individual context-free online community health indicators fail to identify open source software sustainability [3.192308005611312]
We monitored thirty-eight open source projects over the period of a year.
None of the projects were abandoned during this period, and only one project entered a planned shutdown.
Results were highly heterogeneous, showing little commonality across documentation, mean response times for issues and code contributions, and available funding/staffing resources (a minimal sketch of one such indicator, mean issue response time, follows this list).
arXiv Detail & Related papers (2023-09-21T14:41:41Z) - LLM-based Interaction for Content Generation: A Case Study on the Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey to identify the intention to use generative tools by employees of an IT company.
Our results indicate a rather average acceptability of generative tools, although the more useful the tool is perceived to be, the higher the intention seems to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z) - Tool Learning with Foundation Models [158.8640687353623]
With the advent of foundation models, AI systems have the potential to become as adept at tool use as humans.
Despite its immense potential, there is still a lack of a comprehensive understanding of key challenges, opportunities, and future endeavors in this field.
arXiv Detail & Related papers (2023-04-17T15:16:10Z) - The Right Tool for the Job: Open-Source Auditing Tools in Machine Learning [0.0]
In recent years, discussions about fairness in machine learning, AI ethics and algorithm audits have increased.
Many open-source auditing tools are available, but users aren't always aware of the tools, what they are useful for, or how to access them.
This paper aims to reinforce the urgent need to actually use these tools and provides motivations for doing so.
arXiv Detail & Related papers (2022-06-20T15:20:26Z) - Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits [35.7895677378462]
We investigate how industry practitioners (try to) work with existing fairness toolkits.
We identify several opportunities for fairness toolkits to better address practitioner needs.
We highlight implications for the design of future open-source fairness toolkits.
arXiv Detail & Related papers (2022-05-13T23:07:46Z) - AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z) - Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and Communicating the Uncertainty of AI [49.64037266892634]
We describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
The goal of this toolkit is twofold: first, to provide a broad range of capabilities to streamline as well as foster the common practices of quantifying, evaluating, improving, and communicating uncertainty in the AI application development lifecycle; second, to encourage further exploration of UQ's connections to other pillars of trustworthy AI.
arXiv Detail & Related papers (2021-06-02T18:29:04Z)
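As referenced in the "Individual context-free online community health indicators" entry above, one commonly computed indicator is the mean first-response time on a project's issues. The sketch below computes it via the GitHub REST API; sampling only one page of recent issues, treating the first comment as the maintainer response, and the `mean_issue_response_hours` helper are simplifying assumptions for illustration, not the paper's methodology.

```python
# Sketch: mean first-response time (in hours) for a repository's recent issues,
# computed from the GitHub REST API. One page of issues and "first comment =
# response" are simplifying assumptions.
from datetime import datetime

import requests

ISSUES_URL = "https://api.github.com/repos/{owner}/{repo}/issues"


def _parse(ts):
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def mean_issue_response_hours(owner, repo):
    issues = requests.get(
        ISSUES_URL.format(owner=owner, repo=repo),
        params={"state": "all", "per_page": 30},
        timeout=30,
    ).json()
    delays = []
    for issue in issues:
        # The issues endpoint also returns pull requests; skip them, and skip
        # issues that never received a comment.
        if "pull_request" in issue or issue["comments"] == 0:
            continue
        comments = requests.get(issue["comments_url"], timeout=30).json()
        if comments:
            hours = (_parse(comments[0]["created_at"]) -
                     _parse(issue["created_at"])).total_seconds() / 3600
            delays.append(hours)
    return sum(delays) / len(delays) if delays else None
```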
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.