Interactive Machine Learning of Musical Gesture
- URL: http://arxiv.org/abs/2011.13487v1
- Date: Thu, 26 Nov 2020 22:44:54 GMT
- Title: Interactive Machine Learning of Musical Gesture
- Authors: Federico Ghelli Visi and Atau Tanaka
- Abstract summary: This chapter presents an overview of Interactive Machine Learning (IML) techniques applied to the analysis and design of musical gestures.
We discuss how different algorithms may be used to accomplish different tasks, including interacting with complex synthesis techniques.
We conclude the chapter with a description of how some of these techniques were employed by the authors for the development of four musical pieces, thus outlining the implications that IML has for musical practice.
- Score: 1.370633147306388
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This chapter presents an overview of Interactive Machine Learning (IML)
techniques applied to the analysis and design of musical gestures. We go
through the main challenges and needs related to capturing, analysing, and
applying IML techniques to human bodily gestures with the purpose of performing
with sound synthesis systems. We discuss how different algorithms may be used
to accomplish different tasks, including interacting with complex synthesis
techniques and exploring interaction possibilities by means of Reinforcement
Learning (RL) in an interaction paradigm we developed called Assisted
Interactive Machine Learning (AIML). We conclude the chapter with a description
of how some of these techniques were employed by the authors for the
development of four musical pieces, thus outlining the implications that IML
has for musical practice.
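To make the supervised-mapping workflow described above concrete, the sketch below trains a small regression model on a handful of performer-provided gesture examples and then maps new sensor frames to synthesis parameters in real time. This is a minimal illustration only: the feature layout, parameter names, and the use of scikit-learn's MLPRegressor are assumptions made here for clarity and do not reproduce the authors' systems.

```python
# Minimal IML-style gesture-to-sound mapping sketch (illustrative assumptions:
# 6 made-up gesture features, 3 made-up synthesis parameters, scikit-learn).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training examples recorded by the performer: each row is a gesture pose
# described by a few sensor-derived features (e.g. accelerometer / EMG descriptors).
gesture_examples = np.array([
    [0.1, 0.0, 0.2, 0.9, 0.1, 0.0],   # "open hand" pose
    [0.8, 0.7, 0.1, 0.2, 0.9, 0.6],   # "closed fist" pose
    [0.4, 0.9, 0.5, 0.5, 0.3, 0.8],   # "raised arm" pose
])

# The synthesis parameters the performer paired with each pose
# (e.g. grain size, filter cutoff, amplitude), normalised to [0, 1].
synth_params = np.array([
    [0.2, 0.9, 0.3],
    [0.8, 0.1, 0.9],
    [0.5, 0.5, 0.6],
])

# "Training by demonstration": fit a small neural network regressor
# on the few examples the performer provided.
model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(gesture_examples, synth_params)

# Performance time: incoming sensor frames are mapped continuously to
# synthesis parameters, interpolating between (and beyond) the examples.
new_gesture = np.array([[0.5, 0.4, 0.3, 0.6, 0.5, 0.4]])
print(model.predict(new_gesture))   # -> interpolated synthesis parameters
```

In the Assisted Interactive Machine Learning (AIML) paradigm mentioned in the abstract, a reinforcement learning agent would additionally propose candidate mappings of this kind for the performer to rate, turning the exploration of interaction possibilities into a feedback loop rather than a one-off training step.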
Related papers
- Embodied Exploration of Latent Spaces and Explainable AI [0.0]
In this paper, we explore how performers' embodied interactions with a Neural Audio Synthesis model allow the exploration of the latent space of such a model.
We provide background and context for the performance, highlighting the potential of embodied practices to contribute to developing explainable AI systems.
arXiv Detail & Related papers (2024-10-18T16:40:34Z)
- On the Interaction between Software Engineers and Data Scientists when building Machine Learning-Enabled Systems [1.2184324428571227]
Machine Learning (ML) components have been increasingly integrated into the core systems of organizations.
One of the key challenges is the effective interaction between actors with different backgrounds who need to work closely together.
This paper presents an exploratory case study to understand the current interaction and collaboration dynamics between these roles in ML projects.
arXiv Detail & Related papers (2024-02-08T00:27:56Z)
- Online Learning and Planning in Cognitive Hierarchies [10.28577981317938]
We extend an existing formal framework to model complex integrated reasoning behaviours of robotic systems.
The new framework allows for more flexible modelling of the interactions between different reasoning components.
arXiv Detail & Related papers (2023-10-18T23:53:51Z)
- Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning [49.92517970237088]
We tackle the problem of training a robot to understand multimodal prompts.
This type of task poses a major challenge to robots' capability to understand the interconnection and complementarity between vision and language signals.
We introduce an effective framework that learns a policy to perform robot manipulation with multimodal prompts.
arXiv Detail & Related papers (2023-10-14T22:24:58Z)
- Building Emotional Support Chatbots in the Era of LLMs [64.06811786616471]
We introduce an innovative methodology that synthesizes human insights with the computational prowess of Large Language Models (LLMs).
By utilizing the in-context learning potential of ChatGPT, we generate an ExTensible Emotional Support dialogue dataset, named ExTES.
Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions.
arXiv Detail & Related papers (2023-08-17T10:49:18Z)
- Semantic Interactive Learning for Text Classification: A Constructive Approach for Contextual Interactions [0.0]
We propose a novel interaction framework called Semantic Interactive Learning for the text domain.
We frame the problem of incorporating constructive and contextual feedback into the learner as a task to find an architecture that enables more semantic alignment between humans and machines.
We introduce a technique called SemanticPush that is effective for translating conceptual corrections of humans to non-extrapolating training examples.
arXiv Detail & Related papers (2022-09-07T08:13:45Z)
- Face-to-Face Contrastive Learning for Social Intelligence Question-Answering [55.90243361923828]
Multimodal methods have set the state of the art on many tasks, but have difficulty modeling complex face-to-face conversational dynamics.
We propose Face-to-Face Contrastive Learning (F2F-CL), a graph neural network designed to model social interactions.
We experimentally evaluate on the challenging Social-IQ dataset and show state-of-the-art results.
arXiv Detail & Related papers (2022-07-29T20:39:44Z)
- Leveraging Explanations in Interactive Machine Learning: An Overview [10.284830265068793]
Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities.
This paper presents an overview of research where explanations are combined with interactive capabilities.
arXiv Detail & Related papers (2022-07-29T07:46:11Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the approach that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for the mechanical design of new ML solutions and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.