Agency and legibility for artists through Experiential AI
- URL: http://arxiv.org/abs/2306.02327v1
- Date: Sun, 4 Jun 2023 11:00:07 GMT
- Title: Agency and legibility for artists through Experiential AI
- Authors: Drew Hemment, Matjaz Vidmar, Daga Panas, Dave Murray-Rust, Vaishak Belle and Ruth Aylett
- Abstract summary: Experiential AI is an emerging research field that addresses the challenge of making AI tangible and explicit.
We report on an empirical case study of an experiential AI system designed for creative data exploration.
We discuss how experiential AI can increase legibility and agency for artists.
- Score: 12.941266914933454
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Experiential AI is an emerging research field that addresses the challenge of
making AI tangible and explicit, both to fuel cultural experiences for
audiences, and to make AI systems more accessible to human understanding. The
central theme is how artists, scientists and other interdisciplinary actors can
come together to understand and communicate the functionality of AI, ML and
intelligent robots, their limitations, and consequences, through informative
and compelling experiences. It provides an approach and methodology for the
arts and tangible experiences to mediate between impenetrable computer code and
human understanding, making not just AI systems but also their values and
implications more transparent, and therefore accountable. In this paper, we
report on an empirical case study of an experiential AI system designed for
creative data exploration of a user-defined dimension, to enable creators to
gain more creative control over the AI process. We discuss how experiential AI
can increase legibility and agency for artists, and how the arts can provide
creative strategies and methods which can add to the toolbox for human-centred
XAI.
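The abstract does not describe how the case-study system implements exploration of a user-defined dimension. As a rough, hypothetical sketch only (not the authors' implementation), the snippet below shows one common way such a dimension can be defined over a dataset of embeddings and then traversed; all names, data, and parameters are illustrative assumptions.
```python
# Hypothetical illustration, not the system described in the paper:
# a creator defines a custom dimension from example items, then walks
# the dataset along it. Embeddings, item names and indices are made up.
import numpy as np

rng = np.random.default_rng(0)
items = [f"item_{i}" for i in range(200)]        # stand-in dataset
embeddings = rng.normal(size=(len(items), 64))   # stand-in embeddings

def user_axis(pos_idx, neg_idx):
    """Unit vector pointing from the centroid of 'low' examples toward
    the centroid of 'high' examples chosen by the creator."""
    direction = embeddings[pos_idx].mean(axis=0) - embeddings[neg_idx].mean(axis=0)
    return direction / np.linalg.norm(direction)

def explore(axis, k=7):
    """Score every item along the axis and return k items spaced evenly
    from one extreme to the other for the creator to inspect."""
    order = np.argsort(embeddings @ axis)
    picks = order[np.linspace(0, len(order) - 1, k).astype(int)]
    return [items[i] for i in picks]

# The creator marks a few examples that feel high or low on the quality
# they care about, then explores the dataset along that dimension.
print(explore(user_axis(pos_idx=[3, 17, 42], neg_idx=[5, 99, 150])))
```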
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Untangling Critical Interaction with AI in Students Written Assessment [2.8078480738404]
A key challenge lies in ensuring that humans are equipped with the required critical thinking and AI literacy skills.
This paper provides a first step toward conceptualizing the notion of critical learner interaction with AI.
Using both theoretical models and empirical data, our preliminary findings suggest a general lack of Deep interaction with AI during the writing process.
arXiv Detail & Related papers (2024-04-10T12:12:50Z)
- Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z)
- Grasping AI: experiential exercises for designers [8.95562850825636]
We investigate techniques for exploring and reflecting on the interactional affordances, the unique relational possibilities, and the wider social implications of AI systems.
We find that exercises around metaphors and enactments make questions of training and learning, privacy and consent, autonomy and agency more tangible.
arXiv Detail & Related papers (2023-10-02T15:34:08Z)
- Experiential AI: A transdisciplinary framework for legibility and agency in AI [13.397979132753138]
Experiential AI is a research agenda in which scientists and artists come together to investigate the entanglements between humans and machines.
The paper discusses advances and limitations in the field of explainable AI.
arXiv Detail & Related papers (2023-06-01T12:59:06Z)
- AI and the creative realm: A short review of current and future applications [2.1320960069210484]
This study explores the concept of creativity and artificial intelligence (AI).
The development of more sophisticated AI models and the proliferation of human-computer interaction tools have opened up new possibilities for AI in artistic creation.
arXiv Detail & Related papers (2023-06-01T12:28:08Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions intended for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may impact a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.