Tempo vs. Pitch: understanding self-supervised tempo estimation
- URL: http://arxiv.org/abs/2304.06868v1
- Date: Fri, 14 Apr 2023 00:08:08 GMT
- Title: Tempo vs. Pitch: understanding self-supervised tempo estimation
- Authors: Giovana Morais, Matthew E. P. Davies, Marcelo Queiroz, and Magdalena
Fuentes
- Abstract summary: Self-supervision methods learn representations by solving pretext tasks that do not require human-generated labels.
We study the relationship between the input representation and data distribution for self-supervised tempo estimation.
- Score: 0.783970968131292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervision methods learn representations by solving pretext tasks that
do not require human-generated labels, alleviating the need for time-consuming
annotations. These methods have been applied in computer vision, natural
language processing, environmental sound analysis, and recently in music
information retrieval, e.g. for pitch estimation. Particularly in the context
of music, there are few insights into how fragile these models are to
different data distributions, and how this fragility could be mitigated. In this
paper, we explore these questions by dissecting a self-supervised model for
pitch estimation adapted for tempo estimation via rigorous experimentation with
synthetic data. Specifically, we study the relationship between the input
representation and data distribution for self-supervised tempo estimation.
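To make the pretext task concrete, the following is a minimal sketch of a SPICE-style relative-shift objective transposed from pitch to tempo: two views of the same excerpt are time-stretched by known random factors, and the model must account for the relative shift without ever seeing a BPM label. The network shape, the tempogram-slice input, and the constant `sigma` are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a SPICE-style pretext task transposed to tempo: two
# views of the same excerpt are time-stretched by known random factors,
# the model emits one scalar per view, and the difference of the scalars
# must match the known relative stretch. Shapes, names, and `sigma` are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class TempoScalarModel(nn.Module):
    """Maps one tempogram slice (n_bins,) to a single tempo-like scalar."""
    def __init__(self, n_bins: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # confine the scalar to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def relative_tempo_loss(model, view_a, view_b, log_ratio, sigma=0.05):
    """Self-supervised loss: no BPM labels, only the known stretch ratio.

    log_ratio holds log(rate_a / rate_b) per pair; sigma sets how much
    scalar movement corresponds to one unit of log-stretch.
    """
    y_a, y_b = model(view_a), model(view_b)
    return torch.mean(torch.abs((y_a - y_b) - sigma * log_ratio))
```

Because the loss only constrains relative tempo, a separate affine calibration step (as SPICE uses for pitch) would be needed to map the scalar to absolute BPM.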
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which highlight feature or temporal importance, often require expert knowledge.
Evaluating forecast natural language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- From Link Prediction to Forecasting: Addressing Challenges in Batch-based Temporal Graph Learning [0.716879432974126]
We show that the suitability of common batch-oriented evaluation depends on the datasets' characteristics.
For continuous-time temporal graphs, fixed-size batches create time windows with different durations, resulting in an inconsistent dynamic link prediction task.
For discrete-time temporal graphs, the sequence of batches can additionally introduce temporal dependencies that are not present in the data.
arXiv Detail & Related papers (2024-06-07T12:45:12Z)
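To make the batching issue above concrete, here is a small sketch (illustrative names, not the paper's code) showing how fixed-size batches over a bursty continuous-time edge stream cover windows of very different durations, so each batch implicitly poses a different prediction horizon:

```python
# Illustrative sketch (hypothetical names): fixed-size batches over a
# continuous-time edge stream cover windows of very different durations,
# so each batch implicitly poses a different link prediction task.
from itertools import islice

def batch_windows(timestamps, batch_size):
    """Yield (start, end, duration) for consecutive fixed-size batches."""
    it = iter(sorted(timestamps))
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        yield batch[0], batch[-1], batch[-1] - batch[0]

# A bursty stream: eight edges within one second, then two a day apart.
events = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 86400.0, 172800.0]
for start, end, duration in batch_windows(events, batch_size=5):
    print(f"batch [{start:.1f}, {end:.1f}] spans {duration:.1f} s")
# The first batch spans 0.4 s; the second spans 172799.5 s.
```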
- A Survey on Diffusion Models for Time Series and Spatio-Temporal Data [92.1255811066468]
We review the use of diffusion models in time series and spatio-temporal data, categorizing them by model, task type, data modality, and practical application domain.
We categorize diffusion models into unconditioned and conditioned types, and discuss time series and spatio-temporal data separately.
Our survey extensively covers their applications in fields including healthcare, recommendation, climate, energy, audio, and transportation.
arXiv Detail & Related papers (2024-04-29T17:19:40Z)
- Tempo estimation as fully self-supervised binary classification [6.255143207183722]
We propose a fully self-supervised approach that does not rely on any human labeled data.
Our method builds on the fact that generic (music) audio embeddings already encode a variety of properties, including information about tempo.
arXiv Detail & Related papers (2024-01-17T00:15:16Z)
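The summary above does not spell out the method, so the sketch below only illustrates the underlying idea under one plausible reading: self-label pairs by time-stretching one unlabeled clip two ways and train a classifier on the embeddings to decide which version is faster. `embed` and `stretch_fn` are hypothetical stand-ins for a pre-trained embedding model and a time-stretching routine.

```python
# Hedged sketch of the idea, not the paper's method: self-label training
# pairs by time-stretching one unlabeled clip two ways, then learn to
# decide from generic embeddings which version is faster. `embed` and
# `stretch_fn` are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def embed(audio: np.ndarray) -> np.ndarray:
    """Placeholder for a generic pre-trained audio embedding."""
    return rng.standard_normal(128)  # stand-in feature vector

def make_pair(audio, stretch_fn):
    """One self-labeled pair: which of two stretched versions is faster?

    Assumes stretch_fn(audio, r) plays the clip r times faster, so the
    version with the larger rate has the higher tempo.
    """
    ra, rb = rng.uniform(0.7, 1.3, size=2)
    ea = embed(stretch_fn(audio, ra))
    eb = embed(stretch_fn(audio, rb))
    return np.concatenate([ea, eb]), int(ra > rb)  # 1 if first is faster
```

A logistic regression or small MLP trained on such pairs would yield a relative-tempo comparator without a single annotated BPM value.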
- Bring Your Own Data! Self-Supervised Evaluation for Large Language Models [52.15056231665816]
We propose a framework for self-supervised evaluation of Large Language Models (LLMs).
We demonstrate self-supervised evaluation strategies for measuring closed-book knowledge, toxicity, and long-range context dependence.
We find strong correlations between self-supervised and human-supervised evaluations.
arXiv Detail & Related papers (2023-06-23T17:59:09Z)
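As a toy illustration of the self-supervised evaluation idea above, a model can be scored on unlabeled text by applying a transformation with a known expected effect and measuring how its output changes. The negation transformation and the `lm_logprob` interface below are simplified assumptions, not the paper's exact strategies.

```python
# Toy sketch: score a model without labels by checking that it prefers a
# factual sentence over a mechanically negated version of it. The
# `lm_logprob` callable is a hypothetical stand-in for any LLM scoring
# API; the paper's actual transformations and metrics may differ.
def negate(sentence: str) -> str:
    """Crude transformation: turn the first ' is ' into ' is not '."""
    return sentence.replace(" is ", " is not ", 1)

def closed_book_score(lm_logprob, facts):
    """Fraction of facts the model ranks above their negations."""
    wins = sum(lm_logprob(f) > lm_logprob(negate(f)) for f in facts)
    return wins / len(facts)

# Usage: closed_book_score(model.logprob, ["Paris is the capital of France."])
```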
- Interpretation of Time-Series Deep Models: A Survey [27.582644914283136]
We present a wide range of post-hoc interpretation methods for time-series models based on backpropagation, perturbation, and approximation.
We also bring focus to inherently interpretable models, a novel category of interpretation methods in which human-understandable information is designed into the models themselves.
arXiv Detail & Related papers (2023-05-23T23:43:26Z)
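As a concrete instance of the perturbation family named in the survey above, here is a minimal occlusion-style attribution sketch for a time-series model; the mean-value fill and the window size are illustrative choices.

```python
# Minimal occlusion-style (perturbation) attribution for a time series:
# slide a window over the input, replace it with a neutral value, and
# credit each step with the change in the model's output. `predict` is
# any callable returning a scalar; choices here are illustrative.
import numpy as np

def occlusion_importance(predict, series, window=5):
    base = predict(series)
    importance = np.zeros(len(series))
    for start in range(len(series) - window + 1):
        perturbed = series.copy()
        perturbed[start:start + window] = series.mean()  # neutral fill
        importance[start:start + window] += abs(base - predict(perturbed)) / window
    return importance
```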
- TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z)
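The combination step described above might look roughly like the following; the per-pixel softmax weighting is an assumed form, not necessarily the authors' exact modulation scheme.

```python
# Hedged sketch: merge per-interval saliency maps into one image-level
# map with a learned, spatially varying weighting. The softmax blend is
# an assumption, not necessarily the paper's exact modulation.
import torch

def combine_temporal_maps(temporal_maps: torch.Tensor,
                          logits: torch.Tensor) -> torch.Tensor:
    """temporal_maps, logits: (T, H, W) -> combined (H, W) saliency map."""
    weights = torch.softmax(logits, dim=0)       # per-pixel weights over T
    return (weights * temporal_maps).sum(dim=0)  # locally modulated blend
```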
- Generic Temporal Reasoning with Differential Analysis and Explanation [61.96034987217583]
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Evaluation of Local Explanation Methods for Multivariate Time Series Forecasting [0.21094707683348418]
Local interpretability is important in determining why a model makes particular predictions.
Despite the recent focus on AI interpretability, there has been a lack of research in local interpretability methods for time series forecasting.
arXiv Detail & Related papers (2020-09-18T21:15:28Z)
- TSInsight: A local-global attribution framework for interpretability in time-series data [5.174367472975529]
We attach an auto-encoder to the classifier, apply a sparsity-inducing norm on its output, and fine-tune it based on the gradients from the classifier and a reconstruction penalty.
TSInsight learns to preserve features that are important for prediction by the classifier and suppresses those that are irrelevant.
In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations.
arXiv Detail & Related papers (2020-04-06T19:34:25Z)
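Read literally, the TSInsight objective summarized above combines three terms; a minimal sketch follows, with illustrative loss weights and a frozen classifier assumed.

```python
# Minimal sketch of the objective as summarized above (weights are
# illustrative): an auto-encoder in front of a frozen classifier is
# fine-tuned so its output still classifies correctly, stays close to
# the input, and is sparse.
import torch
import torch.nn.functional as F

def tsinsight_style_loss(autoencoder, classifier, x, y,
                         w_recon=1.0, w_sparse=0.01):
    z = autoencoder(x)                             # suppressed/denoised input
    cls_loss = F.cross_entropy(classifier(z), y)   # gradients from classifier
    recon = F.mse_loss(z, x)                       # reconstruction penalty
    sparsity = z.abs().mean()                      # sparsity-inducing L1 norm
    return cls_loss + w_recon * recon + w_sparse * sparsity
```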
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.