Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk
Management
- URL: http://arxiv.org/abs/2110.09271v1
- Date: Tue, 21 Sep 2021 21:22:58 GMT
- Title: Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk
Management
- Authors: Ashwin, William Agnew, Juan Pajaro, Arjun Subramonian (Organizers of
Queer in AI)
- Abstract summary: Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place.
We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate both feminist, non-exploitative participatory design principles and strong, outside, and continual monitoring and testing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI, machine learning, and data science methods are already pervasive in our
society and technology, affecting all of our lives in many subtle ways.
Trustworthy AI has become an important topic because trust in AI systems and
their creators has been lost, or was never present in the first place.
Researchers, corporations, and governments have long and painful histories of
excluding marginalized groups from technology development, deployment, and
oversight. As a direct result of this exclusion, these technologies have long
histories of being less useful or even harmful to minoritized groups. This
infuriating history illustrates why industry cannot be trusted to
self-regulate and why trust in commercial AI systems and development has been
lost. We argue that any AI development, deployment, and monitoring framework
that aspires to trust must incorporate both feminist, non-exploitative
participatory design principles and strong, outside, and continual monitoring
and testing. We additionally explain the importance of considering aspects of
trustworthiness beyond just transparency, fairness, and accountability;
specifically, we argue for treating justice and the shifting of power to
disempowered people as core values of any trustworthy AI system. Creating trustworthy
AI starts by funding, supporting, and empowering groups like Queer in AI so the
field of AI has the diversity and inclusion to credibly and effectively develop
trustworthy AI. Through our years of work and advocacy, we have developed
expert knowledge around questions of whether and how gender, sexuality, and other
aspects of identity should be used in AI systems and how harms along these
lines should be mitigated. Based on this, we discuss a gendered approach to AI,
and further propose a queer epistemology and analyze the benefits it can bring
to AI.
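The abstract's call for strong, outside, and continual monitoring can be made concrete with a small example. The sketch below is a minimal illustration, not the authors' method: an external auditor computes per-group error rates from a model's predictions and flags the run for review when the gap between the best- and worst-served groups exceeds a threshold. All names, data, and the 0.05 threshold are hypothetical, and the None group tag stands in for people who chose not to disclose their identity; in line with the abstract's concerns about how identity data is used, the auditor skips those records rather than inferring a label.

```python
# Hypothetical external-audit sketch (illustrative only, not the paper's
# method): compute per-group error rates and flag large gaps for review.
from collections import defaultdict

def per_group_error_rates(y_true, y_pred, groups):
    """Return {group: error rate}, counting only disclosed group labels."""
    errors, counts = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if group is None:  # non-disclosure: skip the record, never infer
            continue
        counts[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / counts[g] for g in counts}

def audit(y_true, y_pred, groups, max_gap=0.05):
    """Flag for review when best- and worst-served groups diverge too far."""
    rates = per_group_error_rates(y_true, y_pred, groups)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

if __name__ == "__main__":
    # Made-up audit data: ground truth, model outputs, optional group tags.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
    groups = ["a", "a", "b", "b", "b", "b", None, "a"]
    rates, gap, ok = audit(y_true, y_pred, groups)
    print(rates, f"gap={gap:.2f}", "PASS" if ok else "REVIEW")
```

Because the audit needs only predictions and labels, an outside group can run it on a schedule without access to model internals, which is what makes the monitoring external and continual rather than a one-time, self-reported check.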
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that many of AI's shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups; a toy simulation of this feedback loop appears after this list.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Never trust, always verify: a roadmap for Trustworthy AI? [12.031113181911627]
We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We propose a trust (resp. zero-trust) model for AI and suggest a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
arXiv Detail & Related papers (2022-06-23T21:13:10Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examination and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on.
In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating the adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- The Sanction of Authority: Promoting Public Trust in AI [4.729969944853141]
We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society.
We elaborate on the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective.
arXiv Detail & Related papers (2021-01-22T22:01:30Z)
- Socially Responsible AI Algorithms: Issues, Purposes, and Challenges [31.382000425295885]
Technologists and AI researchers have a responsibility to develop trustworthy AI systems.
To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness.
arXiv Detail & Related papers (2021-01-01T17:34:42Z)
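The "deepening of biases over time" flagged in the Fairness in AI summary above can be seen in a toy feedback loop. The simulation below is purely illustrative; its dynamics and numbers are invented rather than taken from any paper listed here. Each "retraining" round pushes a group's approval rate further from the population mean, a stylized rich-get-richer dynamic, so a small initial gap compounds round after round.

```python
# Toy bias feedback loop (illustrative only; dynamics and numbers invented).
def retrain(rates, k=0.3):
    """One retraining round: amplify each group's distance from the mean."""
    mean = sum(rates.values()) / len(rates)
    return {g: min(1.0, max(0.0, r + k * (r - mean))) for g, r in rates.items()}

rates = {"group_a": 0.55, "group_b": 0.45}  # small initial approval gap
for round_no in range(1, 6):
    rates = retrain(rates)
    gap = rates["group_a"] - rates["group_b"]
    print(f"round {round_no}: gap = {gap:.3f}")  # gap grows ~1.3x per round
```

Nothing inside the loop ever corrects the drift; only an outside check, like the audit sketched after the abstract above, would surface and interrupt it.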
This list is automatically generated from the titles and abstracts of the papers in this site.