Towards Integrating Fairness Transparently in Industrial Applications
- URL: http://arxiv.org/abs/2006.06082v3
- Date: Sat, 13 Feb 2021 23:23:08 GMT
- Title: Towards Integrating Fairness Transparently in Industrial Applications
- Authors: Emily Dodwell, Cheryl Flynn, Balachander Krishnamurthy, Subhabrata
Majumdar, Ritwik Mitra
- Abstract summary: We propose a systematic approach to integrate mechanized and human-in-the-loop components in bias detection, mitigation, and documentation of Machine Learning projects.
We present its structural primitives together with a real-world use case showing how the approach can be used to identify potential biases and determine appropriate mitigation strategies.
- Score: 3.478469381434812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous Machine Learning (ML) bias-related failures in recent years have led
to scrutiny of how companies incorporate aspects of transparency and
accountability in their ML lifecycles. Companies have a responsibility to
monitor ML processes for bias and mitigate any bias detected, ensure business
product integrity, preserve customer loyalty, and protect brand image.
Challenges specific to industry ML projects can be broadly categorized into
principled documentation, human oversight, and the need for mechanisms that enable
information reuse and improve cost efficiency. We highlight specific roadblocks
and propose conceptual solutions on a per-category basis for ML practitioners
and organizational subject matter experts. Our systematic approach tackles
these challenges by integrating mechanized and human-in-the-loop components in
bias detection, mitigation, and documentation of projects at various stages of
the ML lifecycle. To motivate the implementation of our system -- SIFT (System
to Integrate Fairness Transparently) -- we present its structural primitives
with an example real-world use case on how it can be used to identify potential
biases and determine appropriate mitigation strategies in a participatory
manner.
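
The abstract does not include SIFT's implementation, so the following is only a minimal sketch of the mechanized-check-plus-human-review hand-off it describes, assuming a tabular dataset with a sensitive group column; every function and field name here is hypothetical rather than part of SIFT:

```python
from collections import defaultdict

def group_positive_rates(records, group_key, label_key):
    """Compute the positive-outcome rate for each group in a labeled dataset."""
    counts, positives = defaultdict(int), defaultdict(int)
    for row in records:
        g = row[group_key]
        counts[g] += 1
        positives[g] += int(row[label_key] == 1)
    return {g: positives[g] / counts[g] for g in counts}

def audit_for_review(records, group_key, label_key, max_gap=0.2):
    """Mechanized check: flag the dataset for human review when the gap between
    the highest and lowest per-group positive rates exceeds max_gap.
    The returned dict doubles as a simple documentation artifact."""
    rates = group_positive_rates(records, group_key, label_key)
    gap = max(rates.values()) - min(rates.values())
    return {
        "per_group_positive_rate": rates,
        "max_gap": gap,
        "needs_human_review": gap > max_gap,  # human-in-the-loop trigger
    }

if __name__ == "__main__":
    data = [
        {"group": "A", "label": 1}, {"group": "A", "label": 1},
        {"group": "A", "label": 0}, {"group": "B", "label": 0},
        {"group": "B", "label": 0}, {"group": "B", "label": 1},
    ]
    print(audit_for_review(data, "group", "label"))
```

In a fuller pipeline, an artifact like the returned dict would be versioned with the project so that later lifecycle stages can reuse the same bias documentation instead of recomputing it.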
Related papers
- Towards Trustworthy Machine Learning in Production: An Overview of the Robustness in MLOps Approach [0.0]
In recent years, AI researchers and practitioners have introduced principles and guidelines to build systems that make reliable and trustworthy decisions.
In practice, a fundamental challenge arises once such a system must be operationalized and deployed, where it has to evolve and operate continuously in real-life environments.
To address this challenge, Machine Learning Operations (MLOps) has emerged as a potential recipe for standardizing ML solutions in deployment.
arXiv Detail & Related papers (2024-10-28T09:34:08Z)
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
- Unbiasing on the Fly: Explanation-Guided Human Oversight of Machine Learning System Decisions [4.24106429730184]
We propose a novel framework for on-the-fly tracking and correction of discrimination in deployed ML systems.
The framework continuously monitors the predictions made by an ML system and flags discriminatory outcomes.
This human-in-the-loop approach empowers reviewers to accept or override the ML system decision.
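
The summary does not specify the framework's interface; the sketch below is an assumed illustration of the monitor-flag-override loop it describes, with all class and function names invented for the example:

```python
from collections import deque

class FairnessMonitor:
    """Tracks recent per-group predictions and flags possible discrimination.
    Illustrative only; the actual framework also uses explanations to guide review."""

    def __init__(self, window=200, max_gap=0.15):
        self.history = deque(maxlen=window)  # recent (group, prediction) pairs
        self.max_gap = max_gap

    def observe(self, group, prediction):
        """Record one binary decision and report whether the window looks discriminatory."""
        self.history.append((group, int(prediction)))
        return self._flagged()

    def _flagged(self):
        rates = {}
        for g in {grp for grp, _ in self.history}:
            preds = [p for grp, p in self.history if grp == g]
            rates[g] = sum(preds) / len(preds)
        return len(rates) > 1 and max(rates.values()) - min(rates.values()) > self.max_gap

def review(model_decision, flagged, human_decision=None):
    """Human-in-the-loop step: when flagged, the reviewer may accept or override."""
    if flagged and human_decision is not None:
        return human_decision
    return model_decision

# Usage: a flagged denial is overridden by the reviewer.
monitor = FairnessMonitor(window=4, max_gap=0.1)
for group, decision in [("A", 1), ("A", 1), ("B", 0)]:
    flagged = monitor.observe(group, decision)
final = review(0, flagged, human_decision=1)
```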
arXiv Detail & Related papers (2024-06-25T19:40:55Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Identifying Concerns When Specifying Machine Learning-Enabled Systems: A Perspective-Based Approach [1.2184324428571227]
PerSpecML is a perspective-based approach for specifying ML-enabled systems.
It helps practitioners identify which attributes, spanning both ML and non-ML components, are important contributors to the overall system's quality.
arXiv Detail & Related papers (2023-09-14T18:31:16Z)
- Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML [52.86328317233883]
We present a comprehensive overview of different ways in which fairness-related harm can arise.
We highlight several open technical challenges for future work in this direction.
arXiv Detail & Related papers (2023-03-15T09:40:08Z)
- Uncertainty-aware predictive modeling for fair data-driven decisions [5.371337604556311]
We show how fair ML systems can be safe ML systems.
For fair decisions, we argue that a safe fail option should be used for individuals with uncertain categorization.
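
A minimal sketch of such a safe fail option, assuming a probabilistic classifier and an arbitrary confidence threshold (both choices are illustrative, not taken from the paper):

```python
def decide_with_safe_fail(proba, threshold=0.75):
    """Return a class label only when the model is confident enough;
    otherwise abstain so the case can be routed to a fallback process.

    proba: dict mapping class label -> predicted probability.
    """
    label, confidence = max(proba.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "ABSTAIN"  # safe fail: defer to human review or a default policy
    return label

print(decide_with_safe_fail({"approve": 0.55, "deny": 0.45}))  # -> ABSTAIN
print(decide_with_safe_fail({"approve": 0.92, "deny": 0.08}))  # -> approve
```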
arXiv Detail & Related papers (2022-11-04T20:04:39Z)
- MMRNet: Improving Reliability for Multimodal Object Detection and Segmentation for Bin Picking via Multimodal Redundancy [68.7563053122698]
We propose a reliable object detection and segmentation system with MultiModal Redundancy (MMRNet).
This is the first system that introduces the concept of multimodal redundancy to address sensor failure issues during deployment.
We present a new label-free multi-modal consistency (MC) score that utilizes the output from all modalities to measure the overall system output reliability and uncertainty.
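
The MC score's exact definition is not given in this summary; as a loose stand-in (an assumption, not the paper's formula), a label-free consistency measure can be built from pairwise agreement between per-modality predictions, with low agreement signalling a possible sensor failure:

```python
from itertools import combinations

def consistency_score(modality_outputs):
    """Fraction of modality pairs that agree on the predicted class.

    modality_outputs: dict mapping modality name -> predicted class id.
    """
    pairs = list(combinations(modality_outputs.values(), 2))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

# High agreement -> trust the fused output; low agreement -> raise a reliability alert.
print(consistency_score({"rgb": 3, "depth": 3, "lidar": 3}))  # 1.0
print(consistency_score({"rgb": 3, "depth": 1, "lidar": 3}))  # 0.33...
```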
arXiv Detail & Related papers (2022-10-19T19:15:07Z)
- Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for mechanical design of new ML solutions, and serves as a promising vehicle towards panoramic learning with all experiences.
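
The summary does not reproduce the equation itself; purely as an assumption about the general shape such a unifying objective can take (not necessarily the paper's exact notation), one template trades off an entropy term on an auxiliary distribution, a divergence to the model, and an "experience" term:

```latex
\min_{q,\,\theta} \; -\alpha\,\mathbb{H}(q) \;+\; \beta\,\mathbb{D}\!\left(q,\; p_{\theta}\right) \;+\; \mathbb{E}_{q}\!\left[-f(t)\right]
```

Here $\mathbb{H}$ is an entropy term, $\mathbb{D}$ a divergence between the auxiliary distribution $q(t)$ and the model $p_{\theta}(t)$, and $f$ an experience function encoding data, rules, or rewards; particular choices of $f$, $\alpha$, and $\beta$ are intended to recover familiar objectives such as maximum likelihood training.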
arXiv Detail & Related papers (2021-08-17T17:44:38Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z) - LiFT: A Scalable Framework for Measuring Fairness in ML Applications [18.54302159142362]
We present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems.
We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn.
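
LiFT's actual API is not shown here; the plain-Python sketch below (not LiFT's interface) illustrates one of the metrics such a toolkit commonly computes, the demographic parity difference over model outputs:

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between the best- and
    worst-treated groups; 0.0 indicates parity on this metric."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```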
arXiv Detail & Related papers (2020-08-14T03:55:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.