Building Bridges: Generative Artworks to Explore AI Ethics
- URL: http://arxiv.org/abs/2106.13901v1
- Date: Fri, 25 Jun 2021 22:31:55 GMT
- Title: Building Bridges: Generative Artworks to Explore AI Ethics
- Authors: Ramya Srinivasan and Devi Parikh
- Abstract summary: In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help educate stakeholders about their roles and responsibilities by serving as accessible and powerful educational tools for surfacing different perspectives.
- Score: 56.058588908294446
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, there has been an increased emphasis on understanding and
mitigating adverse impacts of artificial intelligence (AI) technologies on
society. Across academia, industry, and government bodies, a variety of
endeavours are being pursued towards enhancing AI ethics. A significant
challenge in the design of ethical AI systems is that there are multiple
stakeholders in the AI pipeline, each with their own set of constraints and
interests. These different perspectives are often not understood, due in part
to communication gaps. For example, AI researchers who design and develop AI
models are not necessarily aware of the instability induced in consumers' lives
by the compounded effects of AI decisions. Educating different stakeholders
about their roles and responsibilities in the broader context becomes
necessary. In this position paper, we outline some potential ways in which
generative artworks can play this role by serving as accessible and powerful
educational tools for surfacing different perspectives. We hope to spark
interdisciplinary discussions about computational creativity broadly as a tool
for enhancing AI ethics.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that the shortcomings of current AI systems stem from one overarching failure: they lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - AI Thinking: A framework for rethinking artificial intelligence in practice [2.9805831933488127]
A growing range of disciplines are now involved in studying, developing, and assessing the use of AI in practice.
New, interdisciplinary approaches are needed to bridge competing conceptualisations of AI in practice.
I propose a novel conceptual framework called AI Thinking, which models key decisions and considerations involved in AI use across disciplinary perspectives.
arXiv Detail & Related papers (2024-08-26T04:41:21Z) - Visions of a Discipline: Analyzing Introductory AI Courses on YouTube [11.209406323898019]
We analyze the 20 most watched introductory AI courses on YouTube.
We find that these introductory AI courses do not meaningfully engage with the ethical or societal challenges of AI.
We recommend that introductory AI courses highlight the ethical challenges of AI to present a more balanced perspective.
arXiv Detail & Related papers (2024-05-31T01:48:42Z) - Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the One Hundred Year Study on Artificial Intelligence is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z) - The Ethics of AI in Education [0.0]
The transition of Artificial Intelligence from a lab-based science to live human contexts brings into sharp focus many historic, socio-cultural biases, inequalities, and moral dilemmas.
Questions that have been raised regarding the broader ethics of AI are also relevant for AI in Education (AIED).
AIED raises further challenges related to the impact of its technologies on users, how such technologies might be used to reinforce or alter the way that we learn and teach, and what we, as a society and individuals, value as outcomes of education.
arXiv Detail & Related papers (2024-03-22T11:41:37Z) - Artificial intelligence and the transformation of higher education
institutions [0.0]
This article develops a causal loop diagram (CLD) to map the causal feedback mechanisms of AI transformation in a typical higher education institution (HEI).
Our model accounts for the forces that drive the AI transformation and the consequences of the AI transformation on value creation in a typical HEI.
The article identifies and analyzes several reinforcing and balancing feedback loops, showing how the HEI invests in AI to improve student learning, research, and administration.
arXiv Detail & Related papers (2024-02-13T00:36:10Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to the deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - An Ethical Framework for Guiding the Development of Affectively-Aware
Artificial Intelligence [0.0]
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-a-vis the entities that deploy such AI.
We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.