Real-world Music Plagiarism Detection With Music Segment Transcription System
- URL: http://arxiv.org/abs/2509.08282v1
- Date: Wed, 10 Sep 2025 04:55:48 GMT
- Title: Real-world Music Plagiarism Detection With Music Segment Transcription System
- Authors: Seonghyeon Go
- Abstract summary: We propose a system for detecting music plagiarism by combining various MIR technologies. We developed a music segment transcription system that extracts musically meaningful segments from audio recordings to detect plagiarism. We also collected a Similar Music Pair dataset for musical similarity research using real-world cases.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As a result of continuous advances in Music Information Retrieval (MIR) technology, generating and distributing music has become more diverse and accessible. In this context, interest in music intellectual property protection is increasing to safeguard individual music copyrights. In this work, we propose a system for detecting music plagiarism by combining various MIR technologies. We developed a music segment transcription system that extracts musically meaningful segments from audio recordings to detect plagiarism across different musical formats. With this system, we compute similarity scores based on multiple musical features that can be evaluated through comprehensive musical analysis. Our approach demonstrated promising results in music plagiarism detection experiments, and the proposed method can be applied to real-world music scenarios. We also collected a Similar Music Pair (SMP) dataset for musical similarity research using real-world cases. The dataset is publicly available.
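The abstract describes computing similarity scores over multiple musical features. A minimal sketch of that idea, assuming the features and weights shown here (melody, rhythm, chords; these names, the edit-distance similarity, and the weights are illustrative assumptions, not the paper's actual method):

```python
# Hypothetical sketch: combine per-feature similarities between two music
# segments into one overall score. Feature names, weights, and the use of
# normalized edit distance are assumptions for illustration only.

def feature_similarity(a, b):
    """Normalized edit-distance similarity between two event sequences."""
    m, n = len(a), len(b)
    if m == 0 and n == 0:
        return 1.0
    # Classic dynamic-programming edit (Levenshtein) distance.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost)
        prev = curr
    return 1.0 - prev[n] / max(m, n)

def plagiarism_score(seg_a, seg_b, weights=None):
    """Weighted average of per-feature similarities for two segments.

    seg_a / seg_b: dicts mapping feature name -> event sequence, e.g.
    {"melody": [60, 62, 64], "rhythm": [1, 1, 2], "chords": ["C", "G"]}.
    """
    weights = weights or {"melody": 0.5, "rhythm": 0.25, "chords": 0.25}
    return sum(w * feature_similarity(seg_a[f], seg_b[f])
               for f, w in weights.items())

a = {"melody": [60, 62, 64, 65], "rhythm": [1, 1, 2, 2],
     "chords": ["C", "G", "Am"]}
b = {"melody": [60, 62, 64, 67], "rhythm": [1, 1, 2, 2],
     "chords": ["C", "G", "F"]}
print(round(plagiarism_score(a, b), 3))  # near-identical segments score high
```

A high weighted score would then flag a segment pair for closer musical analysis; the threshold and feature extraction are the hard parts the paper's pipeline addresses.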
Related papers
- Understanding Human Perception of Music Plagiarism Through a Computational Approach [4.404667592877916]
We focus on three musical features commonly used in similarity analysis: melody, rhythm, and chord progression. We propose an LLM-as-a-judge framework that applies a systematic, step-by-step approach.
arXiv Detail & Related papers (2026-01-05T22:37:19Z) - Segment Transformer: AI-Generated Music Detection via Music Structural Analysis [1.7034813545878587]
We aim to improve the accuracy of AIGM detection by analyzing the structural patterns of music segments. Specifically, to extract musical features from short audio clips, we integrated various pre-trained models. For long audio, we developed a segment transformer that divides music into segments and learns inter-segment relationships.
arXiv Detail & Related papers (2025-09-10T04:56:40Z) - Detecting Musical Deepfakes [0.0]
This study investigates the detection of AI-generated songs using the FakeMusicCaps dataset. To simulate real-world adversarial conditions, tempo stretching and pitch shifting were applied to the dataset. Mel spectrograms were generated from the modified audio, then used to train and evaluate a convolutional neural network.
arXiv Detail & Related papers (2025-05-03T21:45:13Z) - Enriching Music Descriptions with a Finetuned-LLM and Metadata for Text-to-Music Retrieval [7.7464988473650935]
Text-to-Music Retrieval plays a pivotal role in content discovery within extensive music databases.
This paper proposes an improved Text-to-Music Retrieval model, denoted as TTMR++.
arXiv Detail & Related papers (2024-10-04T09:33:34Z) - MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z) - MuPT: A Generative Symbolic Music Pretrained Transformer [56.09299510129221]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation)
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z) - Knowledge-based Multimodal Music Similarity [0.0]
This research focuses on the study of musical similarity using both symbolic and audio content.
The aim of this research is to develop a fully explainable and interpretable system that can provide end-users with more control and understanding of music similarity and classification systems.
arXiv Detail & Related papers (2023-06-21T13:12:12Z) - MARBLE: Music Audio Representation Benchmark for Universal Evaluation [79.25065218663458]
We introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE.
It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels, including acoustic, performance, score, and high-level description.
We then establish a unified protocol based on 14 tasks on 8 publicly available datasets, providing a fair and standardized assessment of all open-source pre-trained models developed on music recordings as baselines.
arXiv Detail & Related papers (2023-06-18T12:56:46Z) - Proceedings of the 4th International Workshop on Reading Music Systems [75.24366528496427]
The workshop tries to connect researchers who develop systems for reading music with other researchers and practitioners that could benefit from such systems.
The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition.
These are the proceedings of the 4th International Workshop on Reading Music Systems, held online on Nov. 18th 2022.
arXiv Detail & Related papers (2022-11-23T20:16:45Z) - A Dataset for Greek Traditional and Folk Music: Lyra [69.07390994897443]
This paper presents a dataset for Greek Traditional and Folk music that includes 1570 pieces, summing to around 80 hours of data.
The dataset incorporates YouTube timestamped links for retrieving audio and video, along with rich metadata with regard to instrumentation, geography, and genre.
arXiv Detail & Related papers (2022-11-21T14:15:43Z) - Research on AI Composition Recognition Based on Music Rules [7.699648754969773]
This article constructs a music-rule identification algorithm by extracting modes.
It evaluates the stability of the mode of machine-generated music to judge whether a piece is artificially generated.
arXiv Detail & Related papers (2020-10-15T14:51:24Z) - Multi-Modal Music Information Retrieval: Augmenting Audio-Analysis with
Visual Computing for Improved Music Video Analysis [91.3755431537592]
This thesis combines audio-analysis with computer vision to approach Music Information Retrieval (MIR) tasks from a multi-modal perspective.
The main hypothesis of this work is based on the observation that certain expressive categories such as genre or theme can be recognized on the basis of the visual content alone.
The experiments are conducted for three MIR tasks Artist Identification, Music Genre Classification and Cross-Genre Classification.
arXiv Detail & Related papers (2020-02-01T17:57:14Z)
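Several of the works above, including the main paper's segment transcription system, operate on musically meaningful segments rather than full tracks. A minimal, hypothetical segmenter sketch under a simple assumption (split a note-onset stream wherever the gap between consecutive onsets exceeds a rest threshold; this heuristic is illustrative and not any listed paper's actual method):

```python
# Hypothetical sketch: split a stream of note-onset times into segments
# at long rests. The gap threshold and the rest-based heuristic are
# assumptions for illustration, not a published method.

def segment_by_rests(onsets, gap_threshold=1.0):
    """Split onset times (seconds, ascending) into segments at long gaps."""
    if not onsets:
        return []
    segments = [[onsets[0]]]
    for prev, curr in zip(onsets, onsets[1:]):
        if curr - prev > gap_threshold:
            segments.append([curr])  # long rest: start a new segment
        else:
            segments[-1].append(curr)
    return segments

onsets = [0.0, 0.5, 1.0, 3.5, 4.0, 4.4, 8.0]
print(segment_by_rests(onsets))
# splits at the 1.0 -> 3.5 and 4.4 -> 8.0 gaps, giving three segments
```

Real systems replace this heuristic with learned boundary detection, but the output shape is the same: a list of segments that downstream similarity or classification models compare pairwise.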