Music Representing Corpus Virtual: An Open Sourced Library for
Explorative Music Generation, Sound Design, and Instrument Creation with
Artificial Intelligence and Machine Learning
- URL: http://arxiv.org/abs/2305.14948v1
- Date: Wed, 24 May 2023 09:36:04 GMT
- Authors: Christopher Johann Clarke
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Music Representing Corpus Virtual (MRCV) is an open-source software suite designed to explore the capabilities of Artificial Intelligence (AI) and Machine Learning (ML) in Music Generation, Sound Design, and Virtual Instrument Creation (MGSDIC). The software is accessible to users of varying levels of experience, with an emphasis on an explorative approach to MGSDIC. The main aim of MRCV is to facilitate creativity: users can customize the input datasets used to train the neural networks and choose from a range of options for each network (thoroughly documented in the GitHub Wiki). The suite is designed to be accessible to musicians, audio professionals, sound designers, and composers, regardless of their prior experience in AI or ML, and the documentation abstracts away technical details to keep it approachable. The software is open source, so users can contribute to its development and the community can collectively benefit from the insights and experience of other users.
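As a rough illustration of the workflow the abstract describes (a user-curated corpus trains a configurable neural network whose output space can then be explored), here is a minimal, self-contained PyTorch sketch. It is not MRCV's actual API: the dataset, architecture, and every parameter choice below are hypothetical stand-ins, and MRCV's real options are documented in its GitHub Wiki.

```python
# Illustrative only -- not MRCV's interface. Shows the generic pattern:
# a user-supplied corpus of audio features trains a small autoencoder
# whose latent bottleneck becomes an explorable "sound space".
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for a user-customized corpus: 512 examples of 128-dim audio
# features (e.g. mel-spectrogram frames). A real corpus would be
# extracted from the user's own audio files.
features = torch.rand(512, 128)
loader = DataLoader(TensorDataset(features), batch_size=32, shuffle=True)

# A small autoencoder; the 16-dim bottleneck is the explorable space.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 16), nn.ReLU(),   # latent bottleneck
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 128),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    for (batch,) in loader:
        opt.zero_grad()
        loss = loss_fn(model(batch), batch)  # reconstruct the input
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Once trained, nearby points in the 16-dimensional latent space decode to related feature frames, which is the kind of explorative sound-design loop the abstract gestures at.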
Related papers
- SoundSignature: What Type of Music Do You Like?
SoundSignature is a music application that integrates a custom OpenAI Assistant to analyze users' favorite songs.
The system incorporates state-of-the-art Music Information Retrieval (MIR) Python packages to combine extracted acoustic/musical features with the assistant's extensive knowledge of the artists and bands.
arXiv Detail & Related papers (2024-10-04T12:40:45Z)
- A Survey of Foundation Models for Music Understanding
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models
MusicAgent integrates numerous music-related tools and an autonomous workflow to address user requirements.
The primary goal of the system is to free users from the intricacies of AI-music tools so they can concentrate on the creative aspect.
arXiv Detail & Related papers (2023-10-18T13:31:10Z)
- MARBLE: Music Audio Representation Benchmark for Universal Evaluation
We introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE.
It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels: acoustic, performance, score, and high-level description.
We then establish a unified protocol based on 14 tasks on 8 publicly available datasets, providing a fair and standard assessment of the representations of all open-sourced pre-trained models developed on music recordings as baselines.
arXiv Detail & Related papers (2023-06-18T12:56:46Z)
- VRKitchen2.0-IndoorKit: A Tutorial for Augmented Indoor Scene Building in Omniverse
INDOORKIT is a built-in toolkit for NVIDIA OMNIVERSE.
It provides flexible pipelines for indoor scene building, scene randomizing, and animation controls.
arXiv Detail & Related papers (2022-06-23T17:53:33Z)
- OmniXAI: A Library for Explainable AI
We introduce OmniXAI, an open-source Python library for eXplainable AI (XAI).
It offers omni-way explainable AI capabilities and various interpretable machine learning techniques.
For practitioners, the library provides an easy-to-use unified interface to generate explanations for their applications.
arXiv Detail & Related papers (2022-06-01T11:35:37Z)
- Agents that Listen: High-Throughput Reinforcement Learning with Multiple Sensory Systems
We introduce a new version of the VizDoom simulator to create a highly efficient learning environment that provides raw audio observations.
We train our agent to play the full game of Doom and find that it can consistently defeat a traditional vision-based adversary.
arXiv Detail & Related papers (2021-07-05T18:00:50Z)
- SpeechBrain: A General-Purpose Speech Toolkit
SpeechBrain is an open-source, all-in-one speech toolkit.
It is designed to facilitate the research and development of neural speech processing technologies.
It achieves competitive or state-of-the-art performance on a wide range of speech benchmarks.
arXiv Detail & Related papers (2021-06-08T18:22:56Z)
- Research on AI Composition Recognition Based on Music Rules
The article constructs a music-rule-identifying algorithm by extracting modes.
It measures the stability of the mode of machine-generated music to judge whether a piece was composed by artificial intelligence.
arXiv Detail & Related papers (2020-10-15T14:51:24Z)
- Towards democratizing music production with AI - Design of Variational Autoencoder-based Rhythm Generator as a DAW plugin
This paper proposes a Variational Autoencoder (VAE; Kingma 2014)-based rhythm generation system.
Musicians can train a deep learning model simply by selecting target MIDI files, then generate various rhythms with the model (a generic VAE training sketch follows this list).
arXiv Detail & Related papers (2020-04-01T10:50:14Z)
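Since the rhythm-generator entry above hinges on the VAE objective, a generic sketch may help readers unfamiliar with it. This is not the paper's plugin code: the drum-grid encoding, network sizes, and names (RhythmVAE, vae_loss) are hypothetical, chosen only to show the reparameterization trick and ELBO loss that any VAE-based generator trains with.

```python
# Generic VAE sketch (hypothetical, not the DAW plugin's actual code).
# Rhythm patterns are flattened binary drum grids, e.g. 9 instruments
# x 16 steps = 144-dim vectors -- a common simplification for drums.
import torch
from torch import nn
import torch.nn.functional as F

class RhythmVAE(nn.Module):
    def __init__(self, dim=144, latent=8):
        super().__init__()
        self.enc = nn.Linear(dim, 64)
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # ELBO: reconstruction term + KL divergence to the unit Gaussian prior.
    bce = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

model = RhythmVAE()
x = (torch.rand(32, 144) > 0.8).float()  # stand-in for MIDI-derived grids
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar).item())
# Generating a new rhythm: decode z ~ N(0, I), then threshold the logits.
new_pattern = torch.sigmoid(model.dec(torch.randn(1, 8))) > 0.5
```

Sampling z from the prior and decoding, as in the last line, is what lets a musician generate varied rhythms after training on a handful of selected MIDI files.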
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.