Play Me Something Icy: Practical Challenges, Explainability and the Semantic Gap in Generative AI Music
- URL: http://arxiv.org/abs/2408.07224v1
- Date: Tue, 13 Aug 2024 22:42:05 GMT
- Title: Play Me Something Icy: Practical Challenges, Explainability and the Semantic Gap in Generative AI Music
- Authors: Jesse Allison, Drew Farrar, Treya Nash, Carlos Román, Morgan Weeks, Fiona Xue Ju
- Abstract summary: This pictorial aims to critically consider the nature of text-to-audio and text-to-music generative tools in the context of explainable AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This pictorial aims to critically consider the nature of text-to-audio and text-to-music generative tools in the context of explainable AI. As a group of experimental musicians and researchers, we are enthusiastic about the creative potential of these tools and have sought to understand and evaluate them from the perspectives of prompt creation, control, usability, understandability, explainability of the AI process, and overall aesthetic effectiveness of the results. One of the challenges we have identified, and one not explicitly addressed by these tools, is the inherent semantic gap in using text-based tools to describe something as abstract as music. Other gaps include explainability vs. usability, and user control and input vs. the human creative process. The aim of this pictorial is to raise questions for discussion and to make a few general suggestions on the kinds of improvements we would like to see in generative AI music tools.
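The prompt-to-audio workflow the pictorial critiques can be made concrete with any off-the-shelf text-to-music model. A minimal sketch, assuming MusicGen via Hugging Face transformers (the paper does not name or endorse a specific tool; the model choice, prompt, and file name here are illustrative):

```python
# Hedged illustration of the text-to-music workflow discussed above.
from transformers import AutoProcessor, MusicgenForConditionalGeneration
import scipy.io.wavfile

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# "Icy" is exactly the kind of abstract, cross-modal descriptor that exposes
# the semantic gap: the model must somehow map it to timbre, texture, harmony.
inputs = processor(text=["sparse, icy, glassy textures, slow tempo"],
                   padding=True, return_tensors="pt")
audio = model.generate(**inputs, max_new_tokens=256)

rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("icy.wav", rate=rate, data=audio[0, 0].numpy())
```

Note that nothing in this interface explains *how* the prompt words were interpreted, which is the explainability gap the pictorial highlights.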
Related papers
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Exploring XAI for the Arts: Explaining Latent Space in Generative Music [5.91328657300926]
We show how a latent variable model for music generation can be made more explainable.
We use latent space regularisation to force some specific dimensions of the latent space to map to meaningful musical attributes.
We also provide a visualisation of the musical attributes in the latent space to help people understand and predict the effect of changes to latent space dimensions.
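The regularisation described above is not spelled out in the summary; a minimal sketch of one common formulation, an attribute-regularised VAE penalty in PyTorch (the function name, hyperparameters, and exact penalty are assumptions for illustration, not the authors' code):

```python
import torch
import torch.nn.functional as F

def attribute_regularization(z, attributes, delta=10.0):
    """Encourage latent dimension i to increase monotonically with
    musical attribute i (e.g. note density, pitch range).

    z:          (batch, latent_dim) latent codes
    attributes: (batch, n_attrs) attribute values, n_attrs <= latent_dim
    """
    loss = 0.0
    for i in range(attributes.shape[1]):
        # Pairwise differences across the batch for one latent dim / attribute.
        dz = z[:, i].unsqueeze(0) - z[:, i].unsqueeze(1)              # (B, B)
        da = attributes[:, i].unsqueeze(0) - attributes[:, i].unsqueeze(1)
        # Penalise batch orderings where the latent dimension disagrees
        # with the ordering of its target attribute.
        loss = loss + F.l1_loss(torch.tanh(delta * dz), torch.sign(da))
    return loss

# Added as an extra term to the usual VAE objective, e.g.:
# total_loss = recon_loss + beta * kl_loss + gamma * attribute_regularization(z, attrs)
```

With such a term, moving along a regularised latent dimension changes its mapped attribute predictably, which is what makes the visualisation mentioned above informative.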
arXiv Detail & Related papers (2023-08-10T10:59:24Z)
- The Future of AI-Assisted Writing [0.0]
We conduct a comparative user-study between such tools from an information retrieval lens: pull and push.
Our findings show that users welcome seamless AI assistance in their writing.
Users also enjoyed the collaboration with AI-assisted writing tools and did not feel a lack of ownership.
arXiv Detail & Related papers (2023-06-29T02:46:45Z)
- Art-ificial Intelligence: The Effect of AI Disclosure on Evaluations of Creative Content [0.0]
We show that disclosure regarding the use of AI in the creation of creative content affects human evaluation of such content.
We interpret this result to suggest that reactions to AI-generated content may be negative when the content is viewed as distinctly "human".
arXiv Detail & Related papers (2023-03-11T13:54:17Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Redefining Relationships in Music [55.478320310047785]
We argue that AI tools will fundamentally reshape our music culture.
People working in this space could reduce the potential negative impacts on the practice, consumption, and meaning of music.
arXiv Detail & Related papers (2022-12-13T19:44:32Z)
- A Survey on Artificial Intelligence for Music Generation: Agents, Domains and Perspectives [10.349825060515181]
We describe how humans compose music and how new AI systems could imitate this process.
To understand how AI models and algorithms generate music, we explore, analyze, and describe the agents that take part in the music generation process.
arXiv Detail & Related papers (2022-10-25T11:54:30Z)
- AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two groups (people with and without an AI background) perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
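A minimal sketch of what such an imitation learning baseline might look like: plain behavioral cloning in PyTorch. The policy architecture, names, and training loop are illustrative assumptions, not the competition's actual baseline.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Tiny behavioral-cloning policy: observation features -> action logits."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def train_bc(policy, demos, epochs=10, lr=1e-4):
    """demos: iterable of (obs, action) batches taken from human demonstrations,
    where obs is (B, obs_dim) float and action is (B,) long."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for obs, action in demos:
            opt.zero_grad()
            # Supervised objective: predict the human's action, no reward needed.
            loss = loss_fn(policy(obs), action)
            loss.backward()
            opt.step()
    return policy
```

The point of the competition is precisely that this supervised setup sidesteps hand-written reward functions: the learning signal comes entirely from the human demonstrations.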
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.