Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
- URL: http://arxiv.org/abs/2001.00964v1
- Date: Fri, 3 Jan 2020 20:03:44 GMT
- Title: Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
- Authors: Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, Emily Denton
- Abstract summary: Well-intentioned algorithmic auditing can have effects that may harm the very populations these measures are meant to protect.
We demonstrate a set of five ethical concerns in the particular case of auditing commercial facial processing technology.
We go further to provide tangible illustrations of these concerns, and conclude by reflecting on what these concerns mean for the role of the algorithmic audit and the fundamental product limitations they reveal.
- Score: 17.42753238926119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although essential to revealing biased performance, well-intentioned attempts at algorithmic auditing can have effects that may harm the very populations these measures are meant to protect. This concern is even more salient when auditing biometric systems such as facial recognition, where the data is sensitive and the technology is often used in ethically questionable ways. We demonstrate a set of five ethical concerns in the particular case of auditing commercial facial processing technology, highlighting additional design considerations and ethical tensions the auditor needs to be aware of so as not to exacerbate or complement the harms propagated by the audited system. We go further to provide tangible illustrations of these concerns, and conclude by reflecting on what these concerns mean for the role of the algorithmic audit and the fundamental product limitations they reveal.
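To make the object of such an audit concrete, here is a minimal, hypothetical sketch of a disaggregated performance audit: it computes per-subgroup error rates for a black-box classifier and reports the largest gap, in the spirit of audits of commercial facial analysis systems. The data, subgroup labels, and `predict` function are placeholders, not the authors' actual protocol.

```python
from collections import defaultdict

def disaggregated_audit(samples, predict):
    """Per-subgroup error rates for a black-box classifier.

    `samples` is an iterable of (features, true_label, subgroup)
    triples; `predict` is the audited model's inference function.
    Both stand in for a real benchmark and a commercial API.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for features, true_label, subgroup in samples:
        totals[subgroup] += 1
        if predict(features) != true_label:
            errors[subgroup] += 1
    rates = {g: errors[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Toy model and toy data, purely illustrative:
toy_data = [((0.9,), 1, "group_a"), ((0.2,), 0, "group_a"),
            ((0.8,), 1, "group_b"), ((0.7,), 0, "group_b")]
rates, gap = disaggregated_audit(toy_data, lambda x: int(x[0] > 0.5))
print(rates, gap)  # {'group_a': 0.0, 'group_b': 0.5} 0.5
```

A gap like this is what an audit surfaces; the paper's point is that collecting the sensitive benchmark data needed to compute it carries ethical risks of its own.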
Related papers
- From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing [1.196505602609637]
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We argue that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation (a minimal sketch of this framing appears after this list).
arXiv Detail & Related papers (2024-10-07T06:15:46Z)
- A Game-Theoretic Analysis of Auditing Differentially Private Algorithms with Epistemically Disparate Herd [16.10098472773814]
This study examines the impact of herd audits on algorithm developers using the Stackelberg game approach.
By enhancing transparency and accountability, herd audit contributes to the responsible development of privacy-preserving algorithms.
arXiv Detail & Related papers (2024-04-24T20:34:27Z)
- Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem [0.971392598996499]
We provide the first comprehensive field scan of the AI audit ecosystem.
We identify emerging best practices as well as methods and tools that are becoming commonplace.
We outline policy recommendations to improve the quality and impact of these audits.
arXiv Detail & Related papers (2023-10-04T01:40:03Z)
- The right to audit and power asymmetries in algorithm auditing [68.8204255655161]
We elaborate on the challenges and asymmetries mentioned by Sandvig at IC2S2 2021.
We also contribute a discussion of the asymmetries that were not covered by Sandvig.
We discuss the implications these asymmetries have for algorithm auditing research.
arXiv Detail & Related papers (2023-02-16T13:57:41Z)
- Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms [2.5372245630249632]
We show how injustices materialize for stakeholders across three algorithmic stages in the misinformation detection pipeline.
This framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with algorithmic misinformation detection.
arXiv Detail & Related papers (2022-04-28T15:31:13Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches, which estimate group prevalences rather than individuals' attributes, are particularly suited to tackle the fairness-under-unawareness problem (see the sketch after this list).
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities [7.485814345656486]
Sexual orientation and gender identity are prototypical instances of unobserved characteristics.
New approaches to algorithmic fairness are needed that break away from the prevailing assumption of observed characteristics.
arXiv Detail & Related papers (2021-02-03T18:52:54Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems (a toy illustration appears after this list).
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition [133.35968094967626]
Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances.
With the help of deep learning techniques, it has also witnessed substantial progress, currently achieving around 90% accuracy in benign environments.
Research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant.
arXiv Detail & Related papers (2020-05-14T17:12:52Z)
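As a companion to the hypothesis-testing framing above ("From Transparency to Accountability and Back"), here is a minimal sketch of an audit cast as a one-sided hypothesis test: given audit queries against a protected subgroup, test whether the model's error rate exceeds a tolerated threshold via an exact binomial test. The threshold, significance level, and counts are invented for illustration; the paper's actual formulation may differ.

```python
from scipy.stats import binomtest

def audit_as_hypothesis_test(num_errors, num_queries,
                             tolerated_rate=0.05, alpha=0.01):
    """H0: the true error rate on the audited subgroup is at most
    `tolerated_rate`. Rejecting H0 is audit evidence of a violation.
    All parameter values here are illustrative choices."""
    result = binomtest(num_errors, num_queries, p=tolerated_rate,
                       alternative="greater")
    return result.pvalue, result.pvalue < alpha

# E.g., 19 errors observed in 120 audit queries:
pvalue, violation = audit_as_hypothesis_test(19, 120)
print(f"p = {pvalue:.4f}, violation found: {violation}")
```

Framing the audit this way makes the evidentiary standard explicit: the auditor controls the false-accusation rate through `alpha`, much as a legal procedure fixes a burden of proof.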
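For "Measuring Fairness Under Unawareness of Sensitive Attributes", the following is a rough sketch of the core quantification idea, adjusted classify-and-count: rather than trusting an attribute classifier's individual predictions, correct its aggregate count using its true- and false-positive rates estimated on held-out labeled data. All numbers are invented.

```python
def adjusted_classify_and_count(predicted_share, tpr, fpr):
    """Estimate the true prevalence of an attribute in a sample.

    A raw classifier-based count is biased by misclassification;
    the standard ACC correction inverts that bias using the
    classifier's true-positive rate (tpr) and false-positive
    rate (fpr). Inputs below are illustrative only.
    """
    estimate = (predicted_share - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, estimate))  # clip to a valid prevalence

# A classifier with tpr=0.85 and fpr=0.10 flags 30% of a sample:
# corrected prevalence = (0.30 - 0.10) / (0.85 - 0.10) ≈ 0.267
print(adjusted_classify_and_count(0.30, tpr=0.85, fpr=0.10))
```

Per-group prevalence estimates of this kind can feed group-fairness metrics without ever attaching a sensitive attribute to any individual, which is the appeal of quantification in this setting.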
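Finally, for "Uncertainty as a Form of Transparency", a toy illustration of one common recipe (assumed here, not taken from the survey): report the predictive entropy alongside each prediction and abstain, deferring to a human, when it exceeds a threshold.

```python
import math

def entropy_bits(probs):
    """Shannon entropy (bits) of a predicted class distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def predict_with_uncertainty(probs, abstain_above=0.9):
    """Return a decision plus an uncertainty report.

    `probs` is the model's predicted class distribution; the
    abstention threshold is an illustrative choice, not a
    recommendation from the cited survey.
    """
    h = entropy_bits(probs)
    if h > abstain_above:
        return "abstain (defer to human)", h
    return f"class {max(range(len(probs)), key=probs.__getitem__)}", h

print(predict_with_uncertainty([0.95, 0.03, 0.02]))  # low entropy: decide
print(predict_with_uncertainty([0.40, 0.35, 0.25]))  # high entropy: abstain
```

Surfacing the entropy value itself, not just the decision, is the transparency move: downstream users can see how confident the model was.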
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.