Proposing an Interactive Audit Pipeline for Visual Privacy Research
- URL: http://arxiv.org/abs/2111.03984v1
- Date: Sun, 7 Nov 2021 01:51:43 GMT
- Title: Proposing an Interactive Audit Pipeline for Visual Privacy Research
- Authors: Jasmine DeHart, Chenguang Xu, Lisa Egede, Christan Grant
- Abstract summary: We argue for the use of fairness forensics to discover bias and fairness issues in systems, assert the need for a responsible human-over-the-loop, and reflect on the need to explore research agendas that have harmful societal impacts.
Our goal is to provide a systematic analysis of the machine learning pipeline for visual privacy and bias issues.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In an ideal world, deployed machine learning models will enhance our society.
We hope that those models will provide unbiased and ethical decisions that will
benefit everyone. However, this is not always the case; issues arise at every
stage, from data curation to model deployment. The continued use of biased
datasets and processes will adversely affect communities and increase the cost
of fixing the problem. In this work, we walk through the decision process that
a researcher needs to make before, during, and after their project to consider
the broader impacts of their research on the community. Throughout this
paper, we observe the critical decisions that are often overlooked when
deploying AI, argue for the use of fairness forensics to discover bias and
fairness issues in systems, assert the need for a responsible
human-over-the-loop to bring accountability into the deployed system, and
finally, reflect on the need to explore research agendas that have harmful
societal impacts. We examine visual privacy research and draw lessons that can
apply broadly to Artificial Intelligence. Our goal is to provide a systematic
analysis of the machine learning pipeline for visual privacy and bias issues.
With this pipeline, we hope to raise stakeholder (e.g., researchers, modelers,
corporations) awareness as these issues propagate in the various machine
learning phases.
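To make the fairness forensics and human-over-the-loop ideas concrete, below is a minimal sketch of what one audit step in such a pipeline might look like. It is not drawn from the paper itself: the function names, the choice of demographic parity as the metric, and the 0.1 gap threshold are all illustrative assumptions. The sketch computes per-group positive-prediction rates, measures the largest gap between groups, and flags the model for human review rather than deciding the outcome automatically.

```python
# Illustrative sketch of a "fairness forensics" audit step.
# Not the authors' implementation: names, the demographic-parity metric,
# and the 0.1 gap threshold are assumptions for demonstration only.
from collections import defaultdict


def positive_rate_by_group(records):
    """records: iterable of (group_label, predicted_label) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, pred in records:
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}


def demographic_parity_gap(rates):
    """Largest difference in positive-prediction rate between any two groups."""
    return max(rates.values()) - min(rates.values())


def audit(records, gap_threshold=0.1):
    """Flag the model for human review when the parity gap exceeds the
    threshold (human-over-the-loop: a person, not the pipeline, decides
    what happens next)."""
    rates = positive_rate_by_group(records)
    gap = demographic_parity_gap(rates)
    return {"rates": rates, "gap": gap, "needs_human_review": gap > gap_threshold}


if __name__ == "__main__":
    # Toy predictions: (demographic group, model's binary decision).
    preds = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
    print(audit(preds))
    # gap is ~0.33 between groups A and B, so needs_human_review is True
```

Demographic parity is only one of many criteria surveyed in the related work below; a real audit would compare several criteria at each pipeline phase (data curation, training, deployment) and document why each was or was not appropriate for the deployment context.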
Related papers
- Ethical Challenges in Computer Vision: Ensuring Privacy and Mitigating Bias in Publicly Available Datasets
This paper aims to shed light on the ethical problems of creating and deploying computer vision technology.
Computer vision has become a vital tool in many industries, including medical care, security systems, and trade.
arXiv Detail & Related papers (2024-08-31T00:59:29Z)
- Verification of Machine Unlearning is Fragile
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations of machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- The Pursuit of Fairness in Artificial Intelligence Models: A Survey
This survey offers a synopsis of the different ways researchers have promoted fairness in AI systems.
A thorough study is conducted of the approaches and techniques researchers employ to mitigate bias in AI models.
We also delve into the impact of biased models on user experience and the ethical considerations to weigh when developing and deploying such models.
arXiv Detail & Related papers (2024-03-26T02:33:36Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment
There is broad consensus on the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning must be developed around four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationships between the different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions
We review the concepts and notions of fairness that have been put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions
A large portion of fairness research has gone into producing tools that machine learning practitioners can use to audit for bias while designing their algorithms.
However, these fairness solutions see little application in practice.
This review provides an in-depth summary of the algorithmic bias issues that have been defined and the fairness solution space that has been proposed.
arXiv Detail & Related papers (2021-12-10T17:51:20Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Artificial Intelligence for IT Operations (AIOps) Workshop White Paper
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising at the intersection of machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
- Social Responsibility of Algorithms
The paper gives a short overview of the scientific investigation of this topic, showing that the development, existence, and use of such autonomous artifacts long predate the recent wave of interest in artificial intelligence, now dominated by machine learning.
arXiv Detail & Related papers (2020-12-06T16:46:14Z)
- Bias in Data-driven AI Systems -- An Introductory Survey
This survey focuses on data-driven AI, as a large part of AI is nowadays powered by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race and sex.
arXiv Detail & Related papers (2020-01-14T09:39:09Z)