DeformTune: A Deformable XAI Music Prototype for Non-Musicians
- URL: http://arxiv.org/abs/2508.00160v1
- Date: Thu, 31 Jul 2025 20:57:59 GMT
- Title: DeformTune: A Deformable XAI Music Prototype for Non-Musicians
- Authors: Ziqing Xu, Nick Bryan-Kinns
- Abstract summary: This paper introduces DeformTune, a prototype system that combines a deformable interface with the MeasureVAE model to explore more intuitive, embodied, and explainable AI interaction. We conducted a preliminary study with 11 adult participants without formal musical training to investigate their experience with AI-assisted music creation. Thematic analysis of their feedback revealed recurring challenges, including unclear control mappings, limited expressive range, and the need for guidance throughout use.
- Score: 8.306938034148516
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many existing AI music generation tools rely on text prompts, complex interfaces, or instrument-like controls, which may require musical or technical knowledge that non-musicians do not possess. This paper introduces DeformTune, a prototype system that combines a tactile deformable interface with the MeasureVAE model to explore more intuitive, embodied, and explainable AI interaction. We conducted a preliminary study with 11 adult participants without formal musical training to investigate their experience with AI-assisted music creation. Thematic analysis of their feedback revealed recurring challenges, including unclear control mappings, limited expressive range, and the need for guidance throughout use. We discuss several design opportunities for enhancing the explainability of AI, including multimodal feedback and progressive interaction support. These findings contribute early insights toward making AI music systems more explainable and empowering for novice users.
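The abstract does not describe how the deformable interface drives MeasureVAE, so purely as an illustrative sketch: one common way a physical controller can steer a latent-variable music model is to rescale normalized sensor readings into the model's latent range. Everything below (the function name, the sensor semantics, the latent range of [-2, 2]) is a hypothetical assumption, not DeformTune's actual mapping.

```python
# Illustrative sketch only: maps normalized deformation-sensor readings
# (each in [0, 1]) into a small latent vector that a generative music
# model could decode. DeformTune's real control scheme is not specified
# in the abstract; all names and ranges here are assumptions.

def sensors_to_latent(readings, low=-2.0, high=2.0):
    """Linearly rescale each sensor reading from [0, 1] to [low, high]."""
    if not all(0.0 <= r <= 1.0 for r in readings):
        raise ValueError("sensor readings must be normalized to [0, 1]")
    return [low + r * (high - low) for r in readings]

# Example: a half squeeze (0.5) and a strong twist (0.75)
# on a hypothetical two-sensor deformable device.
latent = sensors_to_latent([0.5, 0.75])
print(latent)  # [0.0, 1.0]
```

A linear map like this makes each physical gesture correspond to one latent dimension, which is one plausible way to address the "unclear control mappings" the participants reported.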
Related papers
- ReaLJam: Real-Time Human-AI Music Jamming with Reinforcement Learning-Tuned Transformers [53.63950017886757]
We introduce ReaLJam, an interface and protocol for live musical jamming sessions between a human and a Transformer-based AI agent trained with reinforcement learning. We enable real-time interactions using the concept of anticipation, where the agent continually predicts how the performance will unfold and visually conveys its plan to the user.
arXiv Detail & Related papers (2025-02-28T17:42:58Z) - Tuning Music Education: AI-Powered Personalization in Learning Music [0.2046223849354785]
We present two case studies using such advances in music technology to address challenges in music education. In our first case study we showcase an application that uses Automatic Chord Recognition to generate personalized exercises from audio tracks. In the second case study we prototype adaptive piano method books that use Automatic Music Transcription to generate exercises at different skill levels.
arXiv Detail & Related papers (2024-12-18T05:25:42Z) - Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z) - A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models in respect of their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z) - Foundation Models for Music: A Survey [77.77088584651268]
Foundation models (FMs) have profoundly impacted diverse sectors, including music.
This comprehensive review examines state-of-the-art (SOTA) pre-trained models and foundation models in music.
arXiv Detail & Related papers (2024-08-26T15:13:14Z) - Play Me Something Icy: Practical Challenges, Explainability and the Semantic Gap in Generative AI Music [0.0]
This pictorial aims to critically consider the nature of text-to-audio and text-to-music generative tools in the context of explainable AI.
arXiv Detail & Related papers (2024-08-13T22:42:05Z) - Towards Explainable and Interpretable Musical Difficulty Estimation: A Parameter-efficient Approach [49.2787113554916]
Estimating music piece difficulty is important for organizing educational music collections.
Our work employs explainable descriptors for difficulty estimation in symbolic music representations.
Our approach, evaluated on piano repertoire categorized into 9 difficulty classes, achieved 41.4% accuracy independently, with a mean squared error (MSE) of 1.7.
arXiv Detail & Related papers (2024-08-01T11:23:42Z) - MuseBarControl: Enhancing Fine-Grained Control in Symbolic Music Generation through Pre-Training and Counterfactual Loss [51.85076222868963]
We introduce a pre-training task designed to link control signals directly with corresponding musical tokens.
We then implement a novel counterfactual loss that promotes better alignment between the generated music and the control prompts.
arXiv Detail & Related papers (2024-07-05T08:08:22Z) - An Autoethnographic Exploration of XAI in Algorithmic Composition [7.775986202112564]
This paper introduces an autoethnographic study of the use of the MeasureVAE generative music XAI model with interpretable latent dimensions trained on Irish music.
Findings suggest that the exploratory nature of the music-making workflow foregrounds musical features of the training dataset rather than features of the generative model itself.
arXiv Detail & Related papers (2023-08-11T12:03:17Z) - Adoption of AI Technology in the Music Mixing Workflow: An Investigation [0.0]
The study investigates the current state of AI in music mixing and its adoption by different user groups.
Our findings show that while AI mixing tools can simplify the process, pro-ams seek precise control and customization options.
The study provides strategies for designing effective AI mixing tools for different user groups and outlines future directions.
arXiv Detail & Related papers (2023-04-06T22:47:59Z) - AI Song Contest: Human-AI Co-Creation in Songwriting [8.399688944263843]
We present findings on what 13 musician/developer teams, a total of 61 users, needed when co-creating a song with AI.
We show how they leveraged and repurposed existing characteristics of AI to overcome some of these challenges.
Findings reflect a need to design machine learning-powered music interfaces that are more decomposable, steerable, interpretable, and adaptive.
arXiv Detail & Related papers (2020-10-12T01:27:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.