The Right Tool for the Job: Open-Source Auditing Tools in Machine
Learning
- URL: http://arxiv.org/abs/2206.10613v1
- Date: Mon, 20 Jun 2022 15:20:26 GMT
- Title: The Right Tool for the Job: Open-Source Auditing Tools in Machine
Learning
- Authors: Cherie M Poland
- Abstract summary: In recent years, discussions about fairness in machine learning, AI ethics and algorithm audits have increased.
Many open-source auditing tools are available, but users aren't always aware of the tools, what they are useful for, or how to access them.
This paper aims to reinforce the urgent need to actually use these tools and provides motivations for doing so.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, discussions about fairness in machine learning, AI ethics
and algorithm audits have increased. Many entities have developed framework
guidance to establish a baseline rubric for fairness and accountability.
However, in spite of increased discussions and multiple frameworks, algorithm
and data auditing still remain difficult to execute in practice. Many
open-source auditing tools are available, but users aren't always aware of the
tools, what they are useful for, or how to access them. Model auditing and
evaluation are not frequently emphasized skills in machine learning. There are
also legal reasons for the proactive adoption of these tools that extend beyond
the desire for greater fairness in machine learning. There are positive social
issues of public perception and goodwill that matter in our highly connected
global society. Greater awareness of these tools and the reasons for actively
utilizing them may be helpful to the entire continuum of programmers, data
scientists, engineers, researchers, users and consumers of AI and machine
learning products. It is important for everyone to better understand the input
and output differentials, how they are occurring, and what can be done to
promote FATE (fairness, accountability, transparency, and ethics) in machine-
and deep learning. The ability to freely access open-source auditing tools
removes barriers to fairness assessment at the most basic levels of machine
learning. This paper aims to reinforce the urgent need to actually use these
tools and provides motivations for doing so. The exemplary tools highlighted
herein are open-source with software or code-base repositories available that
can be used immediately by anyone worldwide.
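As a concrete, hedged illustration of the kind of audit the paper argues is already within reach (this sketch is not taken from the paper itself), the snippet below shows a minimal fairness check with Fairlearn, one well-known open-source toolkit in this space. The synthetic data, the "group" attribute, and the chosen metrics are illustrative assumptions.

# Minimal sketch of a basic fairness audit with the open-source Fairlearn library.
# The dataset, the protected attribute, and the metrics below are illustrative
# assumptions, not drawn from the paper.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

# Synthetic data: two features, a binary label, and a binary sensitive attribute "group".
rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
group = rng.integers(0, 2, size=n)  # hypothetical protected attribute
y = (X["x1"] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Disaggregate accuracy and selection rate by group to surface disparities.
audit = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=g_test,
)
print(audit.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y_test, y_pred, sensitive_features=g_test))

Disaggregated metrics of this kind are the "most basic level" of fairness assessment the abstract refers to; the same pattern extends to other open-source toolkits such as AIF360 or AI Explainability 360.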
Related papers
- From Literature to Practice: Exploring Fairness Testing Tools for the Software Industry Adoption [5.901307724130718]
In today's world, we need to ensure that AI systems are fair and unbiased.
Current fairness testing tools need significant improvements to better support software developers.
New tools should be user-friendly, well-documented, and flexible enough to handle different kinds of data.
arXiv Detail & Related papers (2024-09-04T04:23:08Z) - The Impact of Generative AI-Powered Code Generation Tools on Software Engineer Hiring: Recruiters' Experiences, Perceptions, and Strategies [4.557635080377692]
This study explores recruiters' experiences and perceptions regarding GenAI-powered code generation tools.
Findings from our survey of 32 industry professionals indicate that although most participants are familiar with such tools, the majority of organizations have not adjusted their candidate evaluation methods to account for candidates' use/knowledge of these tools.
Most participants believe that it is important to incorporate GenAI-powered code generation tools into computer science curricula.
arXiv Detail & Related papers (2024-09-02T00:00:29Z) - Learning to Ask: When LLMs Meet Unclear Instruction [49.256630152684764]
Large language models (LLMs) can leverage external tools for addressing a range of tasks unattainable through language skills alone.
We evaluate the tool-use performance of LLMs under imperfect instructions, analyze the resulting error patterns, and build a challenging tool-use benchmark called Noisy ToolBench.
We propose a novel framework, Ask-when-Needed (AwN), which prompts LLMs to ask questions to users whenever they encounter obstacles due to unclear instructions.
arXiv Detail & Related papers (2024-08-31T23:06:12Z) - Making Language Models Better Tool Learners with Execution Feedback [36.30542737293863]
Tools serve as pivotal interfaces that enable humans to understand and reshape the environment.
Existing tool learning methodologies induce large language models to utilize tools indiscriminately.
We propose Tool leaRning wIth exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the model to continually learn through feedback derived from tool execution.
arXiv Detail & Related papers (2023-05-22T14:37:05Z) - LLM-based Interaction for Content Generation: A Case Study on the
Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey to identify the intention to use generative tools by employees of an IT company.
Our results indicate only moderate acceptability of generative tools, although the more useful a tool is perceived to be, the stronger the intention to use it appears to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z) - Tool Learning with Foundation Models [158.8640687353623]
With the advent of foundation models, AI systems have the potential to become as adept at tool use as humans.
Despite its immense potential, there is still a lack of a comprehensive understanding of key challenges, opportunities, and future endeavors in this field.
arXiv Detail & Related papers (2023-04-17T15:16:10Z) - A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z) - Exploring How Machine Learning Practitioners (Try To) Use Fairness
Toolkits [35.7895677378462]
We investigate how industry practitioners (try to) work with existing fairness toolkits.
We identify several opportunities for fairness toolkits to better address practitioner needs.
We highlight implications for the design of future open-source fairness toolkits.
arXiv Detail & Related papers (2022-05-13T23:07:46Z) - Flashlight: Enabling Innovation in Tools for Machine Learning [50.63188263773778]
We introduce Flashlight, an open-source library built to spur innovation in machine learning tools and systems.
We see Flashlight as a tool enabling research that can benefit widely used libraries downstream and bring machine learning and systems researchers closer together.
arXiv Detail & Related papers (2022-01-29T01:03:29Z) - AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z) - Intuitiveness in Active Teaching [7.8029610421817654]
We analyze the intuitiveness of certain algorithms when they are actively taught by users.
We offer a systematic method to judge the efficacy of human-machine interactions.
arXiv Detail & Related papers (2020-12-25T09:31:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.