An Autoethnographic Exploration of XAI in Algorithmic Composition
- URL: http://arxiv.org/abs/2308.06089v1
- Date: Fri, 11 Aug 2023 12:03:17 GMT
- Title: An Autoethnographic Exploration of XAI in Algorithmic Composition
- Authors: Ashley Noel-Hirst and Nick Bryan-Kinns
- Abstract summary: This paper introduces an autoethnographic study of the use of the MeasureVAE generative music XAI model with interpretable latent dimensions trained on Irish folk music.
Findings suggest that the exploratory nature of the music-making workflow foregrounds musical features of the training dataset rather than features of the generative model itself.
- Score: 7.775986202112564
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning models are capable of generating complex music across a
range of genres from folk to classical music. However, current generative music
AI models are typically difficult to understand and control in meaningful ways.
Whilst research has started to explore how explainable AI (XAI) generative
models might be created for music, no generative XAI models have been studied
in music making practice. This paper introduces an autoethnographic study of
the use of the MeasureVAE generative music XAI model with interpretable latent
dimensions trained on Irish folk music. Findings suggest that the exploratory
nature of the music-making workflow foregrounds musical features of the
training dataset rather than features of the generative model itself. The
appropriation of an XAI model within an iterative workflow highlights the
potential of XAI models to form part of a richer and more complex workflow than
they were initially designed for.
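As a rough illustration of the kind of exploratory workflow the abstract describes, the sketch below sweeps one "interpretable" latent dimension of a VAE-style music model while holding the others fixed. The decoder here is a stand-in (a fixed random linear map), and the latent size and the index of a regularised "note density" axis are assumptions for illustration, not MeasureVAE's actual architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8          # assumed latent size for illustration
MEASURE_STEPS = 16      # one measure quantised to 16 steps

W = rng.normal(size=(MEASURE_STEPS, LATENT_DIM))  # toy stand-in "decoder"

def decode(z):
    """Map a latent vector to per-step note probabilities (toy decoder)."""
    logits = W @ z
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> probabilities

base = np.zeros(LATENT_DIM)
NOTE_DENSITY_DIM = 0    # hypothetical index of a regularised 'note density' axis

# Explore the model by sweeping the chosen dimension, as a musician might
# via a slider, and observing how the generated output changes.
for value in (-2.0, 0.0, 2.0):
    z = base.copy()
    z[NOTE_DENSITY_DIM] = value
    probs = decode(z)
    print(f"density dim = {value:+.1f} -> mean note prob = {probs.mean():.3f}")
```

In practice a musician would audition the decoded measures rather than inspect probabilities, but the control surface (one semantically labelled axis per slider) is the same idea.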
Related papers
- Prevailing Research Areas for Music AI in the Era of Foundation Models [8.067636023395236]
There has been a surge of generative music AI applications within the past few years.
We discuss the current state of music datasets and their limitations.
We highlight applications of these generative models towards extensions to multiple modalities and integration with artists' workflow.
arXiv Detail & Related papers (2024-09-14T09:06:43Z) - Foundation Models for Music: A Survey [77.77088584651268]
Foundation models (FMs) have profoundly impacted diverse sectors, including music.
This comprehensive review examines state-of-the-art (SOTA) pre-trained models and foundation models in music.
arXiv Detail & Related papers (2024-08-26T15:13:14Z) - Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations [52.11801730860999]
In recent years, the robot learning community has shown increasing interest in using deep generative models to capture the complexity of large datasets.
We present the different types of models that the community has explored, such as energy-based models, diffusion models, action value maps, and generative adversarial networks.
We also present the different types of applications in which deep generative models have been used, from grasp generation to trajectory generation or cost learning.
arXiv Detail & Related papers (2024-08-08T11:34:31Z) - Reducing Barriers to the Use of Marginalised Music Genres in AI [7.140590440016289]
This project aims to explore the eXplainable AI (XAI) challenges and opportunities associated with reducing barriers to using marginalised genres of music with AI models.
XAI opportunities identified included improving transparency and control of AI models, explaining the ethics and bias of AI models, fine-tuning large models with small datasets to reduce bias, and explaining style-transfer opportunities with AI models.
We are now building on this project to bring together a global International Responsible AI Music community and invite people to join our network.
arXiv Detail & Related papers (2024-07-18T12:10:04Z) - A Survey of Music Generation in the Context of Interaction [3.6522809408725223]
Machine learning has been successfully used to compose and generate music, both melodies and polyphonic pieces.
However, most of these models are not suitable for human-machine co-creation through live interaction.
arXiv Detail & Related papers (2024-02-23T12:41:44Z) - StemGen: A music generation model that listens [9.489938613869864]
We present an alternative paradigm for producing music generation models that can listen and respond to musical context.
We describe how such a model can be constructed using a non-autoregressive, transformer-based model architecture.
The resulting model reaches the audio quality of state-of-the-art text-conditioned models, as well as exhibiting strong musical coherence with its context.
arXiv Detail & Related papers (2023-12-14T08:09:20Z) - Exploring Variational Auto-Encoder Architectures, Configurations, and Datasets for Generative Music Explainable AI [7.391173255888337]
Generative AI models for music and the arts are increasingly complex and hard to understand.
One approach to making generative AI models more understandable is to impose a small number of semantically meaningful attributes on them.
This paper contributes a systematic examination of the impact that different combinations of Variational Auto-Encoder models (MeasureVAE and AdversarialVAE) have on music generation performance.
arXiv Detail & Related papers (2023-11-14T17:27:30Z) - Simple and Controllable Music Generation [94.61958781346176]
MusicGen is a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens.
Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns.
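The "efficient token interleaving patterns" mentioned above can be illustrated with a delay-style offset: each of the parallel codebook streams is shifted by one extra step so a single-stage LM can predict all streams autoregressively. The function name, padding token, and stream sizes below are illustrative assumptions, not MusicGen's actual implementation.

```python
PAD = -1  # placeholder token for positions shifted out of range

def delay_interleave(streams):
    """Offset stream k by k steps and pad the vacated positions."""
    t = len(streams[0])
    out = []
    for row, stream in enumerate(streams):
        out.append([PAD] * row + stream[: t - row])
    return out

# Three toy codebook streams of four timesteps each.
codebooks = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
for row in delay_interleave(codebooks):
    print(row)
```

After interleaving, column t of the layout mixes codebook 0 at time t with codebook 1 at time t-1, and so on, which is what lets one transformer pass cover all streams.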
arXiv Detail & Related papers (2023-06-08T15:31:05Z) - Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task [86.72661027591394]
We generate complete and semantically consistent symbolic music scores from text descriptions.
We explore the efficacy of using publicly available checkpoints for natural language processing in the task of text-to-music generation.
Our experimental results show that the improvement from using pre-trained checkpoints is statistically significant in terms of BLEU score and edit distance similarity.
arXiv Detail & Related papers (2022-11-21T07:19:17Z) - Incorporating Music Knowledge in Continual Dataset Augmentation for Music Generation [69.06413031969674]
Aug-Gen is a method of dataset augmentation for any music generation system trained on a resource-constrained domain.
We apply Aug-Gen to Transformer-based chorale generation in the style of J.S. Bach, and show that this allows for longer training and results in better generative output.
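The Aug-Gen idea as described in the abstract can be sketched as a feedback loop: generated outputs that pass a quality check are added back into the resource-constrained training set, allowing longer training. The generator and scorer below are placeholders (a random melody generator and a repetition penalty), not the paper's Bach chorale model or its actual selection criterion.

```python
import random

random.seed(0)

def generate():
    """Stand-in for the generative model: a random 8-note melody."""
    return [random.randint(0, 11) for _ in range(8)]

def quality(seq):
    """Stand-in for a music-knowledge check: penalise excessive repetition."""
    return len(set(seq)) / len(seq)

def augment(dataset, rounds=5, threshold=0.75):
    """Feed sufficiently good generations back into the training set."""
    for _ in range(rounds):
        candidate = generate()
        if quality(candidate) >= threshold:
            dataset.append(candidate)
    return dataset

data = augment([[0, 2, 4, 5, 7, 9, 11, 0]])
print(f"training set size after augmentation: {len(data)}")
```

The key design choice is the filter: only outputs that satisfy a domain-specific criterion re-enter training, so the augmented dataset grows without drifting away from the target style.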
arXiv Detail & Related papers (2020-06-23T21:06:15Z) - RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.