Unified Gradient Reweighting for Model Biasing with Applications to Source Separation
- URL: http://arxiv.org/abs/2010.13228v1
- Date: Sun, 25 Oct 2020 21:41:45 GMT
- Title: Unified Gradient Reweighting for Model Biasing with Applications to Source Separation
- Authors: Efthymios Tzinis, Dimitrios Bralios, Paris Smaragdis
- Abstract summary: We propose a simple, unified gradient reweighting scheme to bias the learning process of a model and steer it towards a certain distribution of results.
We apply this method to various source separation tasks, in order to shift the operating point of the models towards different objectives.
Our framework enables the user to control a robustness trade-off between worst and average performance.
- Score: 27.215800308343322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent deep learning approaches have shown great improvement in audio source
separation tasks. However, the vast majority of such work is focused on
improving average separation performance, often neglecting to examine or
control the distribution of the results. In this paper, we propose a simple,
unified gradient reweighting scheme: a lightweight modification that biases
the learning process of a model and steers it towards a certain distribution
of results. More specifically, we reweight the gradient updates of each batch,
using a user-specified probability distribution. We apply this method to
various source separation tasks, in order to shift the operating point of the
models towards different objectives. We demonstrate that different
parameterizations of our unified reweighting scheme can be used to address
several real-world problems, such as unreliable separation estimates. Our
framework enables the user to control a robustness trade-off between
worst-case and average performance. Moreover, we show experimentally that our
unified reweighting scheme can also shift the model's focus towards being
more accurate on user-specified sound classes, or towards easier examples in
order to enable faster convergence.
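Since gradients are linear in the loss, reweighting the per-example losses of a batch reweights the corresponding gradient updates by the same factors. Below is a minimal PyTorch sketch of this idea; the function names and the softmax-over-losses distribution (which emphasizes the worst-performing examples) are illustrative assumptions, not the authors' exact parameterization.

```python
import torch

def reweighted_batch_loss(per_example_losses: torch.Tensor,
                          weights: torch.Tensor) -> torch.Tensor:
    """Weighted sum of per-example losses. Because gradients are linear,
    weighting the losses reweights each example's gradient contribution."""
    weights = weights / weights.sum()            # normalize to a distribution
    return (weights.detach() * per_example_losses).sum()

def worst_case_weights(per_example_losses: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """One possible user-specified distribution: a softmax over the batch
    losses. alpha = 0 recovers the plain average; larger alpha shifts the
    operating point towards the worst-performing examples in the batch."""
    return torch.softmax(alpha * per_example_losses.detach(), dim=0)

# Inside a standard training step (model, batch, and loss_fn assumed):
#   losses = loss_fn(model(x), y, reduction="none")   # shape: (batch,)
#   loss = reweighted_batch_loss(losses, worst_case_weights(losses, alpha=2.0))
#   loss.backward()
```

A single scalar such as alpha then acts as the knob for the worst-case versus average-performance trade-off described above.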
Related papers
- Quantifying Uncertainty and Variability in Machine Learning: Confidence Intervals for Quantiles in Performance Metric Distributions [0.17265013728931003]
Machine learning models are widely used in applications where reliability and robustness are critical.
Model evaluation often relies on single-point estimates of performance metrics that fail to capture the inherent variability in model performance.
This contribution explores the use of quantiles and confidence intervals to analyze such distributions, providing a more complete understanding of model performance and its uncertainty.
arXiv Detail & Related papers (2025-01-28T13:21:34Z)
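A minimal sketch of the idea in the entry above, using the standard percentile bootstrap to put a confidence interval around a quantile of a metric distribution; the paper's exact interval construction may differ.

```python
import numpy as np

def quantile_ci(metric_samples, q=0.1, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the q-th quantile of a
    performance-metric distribution (e.g., per-example separation scores)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(metric_samples)
    resamples = rng.choice(x, size=(n_boot, x.size), replace=True)
    boot_quantiles = np.quantile(resamples, q, axis=1)
    lo, hi = np.quantile(boot_quantiles, [alpha / 2, 1 - alpha / 2])
    return np.quantile(x, q), (lo, hi)

# Example: the 10th-percentile metric with a 95% interval.
#   point, (lo, hi) = quantile_ci(per_example_scores, q=0.10)
```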
- Improving Predictor Reliability with Selective Recalibration [15.319277333431318]
Recalibration is one of the most effective ways to produce reliable confidence estimates with a pre-trained model.
We propose selective recalibration, where a selection model learns to reject some user-chosen proportion of the data.
Our results show that selective recalibration consistently leads to significantly lower calibration error than a wide range of selection and recalibration baselines.
arXiv Detail & Related papers (2024-10-07T18:17:31Z)
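A minimal sketch of the selective-recalibration idea above. Max-softmax confidence as the selection score and grid-searched temperature scaling as the recalibrator are illustrative assumptions; the paper learns the selection model jointly with recalibration.

```python
import numpy as np

def selective_recalibrate(logits, labels, reject_frac=0.2,
                          temps=np.linspace(0.5, 5.0, 46)):
    """Reject the reject_frac least-confident examples, then fit a
    temperature on the remainder by minimizing negative log-likelihood."""
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    conf = softmax(logits).max(axis=1)
    keep = conf >= np.quantile(conf, reject_frac)    # accept the rest
    kept_logits, kept_labels = logits[keep], labels[keep]

    def nll(T):
        p = softmax(kept_logits / T)
        return -np.log(p[np.arange(len(kept_labels)), kept_labels] + 1e-12).mean()

    best_T = min(temps, key=nll)                     # 1-D grid search
    return keep, best_T
```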
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on the equivalence of existing baseline-correction methods in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
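For intuition about variance-optimal baselines, here is one standard control-variate construction for the inverse-propensity-scoring (IPS) estimator; the paper derives the optimal correction in closed form within a unified framework, which this sketch only approximates with a plug-in estimate.

```python
import numpy as np

def baseline_corrected_ips(rewards, weights):
    """IPS value estimate with a scalar control-variate baseline.

    Importance weights w have expectation 1 under a correctly specified
    logging policy, so V(b) = mean(w * r) - b * (mean(w) - 1) is unbiased
    for any fixed b. Variance is minimized at b = Cov(w*r, w) / Var(w);
    plugging in the empirical covariance introduces only a small bias."""
    r, w = np.asarray(rewards), np.asarray(weights)
    cov = np.cov(w * r, w, ddof=1)                   # 2x2 covariance matrix
    b = cov[0, 1] / max(cov[1, 1], 1e-12)
    return np.mean(w * r) - b * (np.mean(w) - 1.0)
```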
- Distributionally Robust Post-hoc Classifiers under Prior Shifts [31.237674771958165]
We investigate the problem of training models that are robust to shifts caused by changes in the distribution of class-priors or group-priors.
We present an extremely lightweight post-hoc approach that performs scaling adjustments to predictions from a pre-trained model.
arXiv Detail & Related papers (2023-09-16T00:54:57Z)
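The textbook special case of the post-hoc adjustment described above, for intuition: when only the class priors shift, rescaling probabilities by the prior ratio and renormalizing recovers the posterior under the new prior. The paper's scaling adjustments are tuned for distributional robustness rather than this fixed Bayes rescaling.

```python
import numpy as np

def prior_shift_adjust(probs, source_prior, target_prior):
    """Rescale a pre-trained classifier's class probabilities by the
    target/source prior ratio and renormalize (assumes the class-conditional
    densities p(x|y) are unchanged)."""
    ratio = np.asarray(target_prior) / np.asarray(source_prior)
    adjusted = np.asarray(probs) * ratio             # broadcast over classes
    return adjusted / adjusted.sum(axis=-1, keepdims=True)

# Example: a balanced 3-class model deployed where class 0 dominates.
#   prior_shift_adjust([[0.5, 0.3, 0.2]], [1/3] * 3, [0.8, 0.1, 0.1])
```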
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
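A hedged sketch of adversarial reweighting with a parametric adversary, in the spirit of the entry above but not its exact recipe; the batch-softmax parameterization of the likelihood ratio and the KL coefficient are assumptions.

```python
import torch

def dro_losses(per_example_losses: torch.Tensor,
               adv_scores: torch.Tensor,
               kl_coef: float = 1.0):
    """per_example_losses: (n,) losses from the main model.
    adv_scores: (n,) scores from a small adversary network on the batch.
    Returns (model_loss, adversary_loss); step each with its own optimizer."""
    n = per_example_losses.shape[0]
    w = torch.softmax(adv_scores, dim=0)             # distribution over batch
    ratios = n * w                                   # mean-one likelihood ratios
    # The model minimizes the reweighted loss (weights held constant).
    model_loss = (ratios.detach() * per_example_losses).mean()
    # The adversary maximizes it, regularized by KL(w || uniform) so the
    # implied distribution stays close to the data distribution.
    kl = (w * torch.log(n * w + 1e-12)).sum()
    adv_loss = -(ratios * per_example_losses.detach()).mean() + kl_coef * kl
    return model_loss, adv_loss
```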
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework built on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
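For context, a sketch of vanilla consistency regularization on unlabeled data; FeatDistLoss, the paper's contribution, additionally shapes distances between the two views' features, which this generic version does not reproduce.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model: torch.nn.Module,
                     x_weak: torch.Tensor,
                     x_strong: torch.Tensor) -> torch.Tensor:
    """Predictions on a weakly augmented view serve as fixed targets for a
    strongly augmented view of the same unlabeled inputs; this term is
    added to the usual supervised loss on labeled data."""
    with torch.no_grad():
        targets = F.softmax(model(x_weak), dim=1)    # pseudo-targets
    log_preds = F.log_softmax(model(x_strong), dim=1)
    return F.kl_div(log_preds, targets, reduction="batchmean")
```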
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
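A simplified sketch of the two selection signals described above: Jensen-Shannon divergence between a prediction and its given label as a cleanliness score, and agreement between two augmented views as an in-distribution check. Thresholds and the paper's full training recipe are omitted.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence, base 2 so values lie in [0, 1]."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b), axis=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def clean_likelihood(probs, label_onehot):
    """Near 1 when the prediction matches the given label (likely clean),
    near 0 when it does not (likely noisy)."""
    return 1.0 - js_divergence(probs, label_onehot)

def view_agreement(probs_view1, probs_view2):
    """Consistency between two augmented views of the same sample; low
    agreement flags likely out-of-distribution examples."""
    return 1.0 - js_divergence(probs_view1, probs_view2)
```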
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
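Finally, a generic instantiation of the diversity pressure described in the last entry: an ensemble whose members each pay the usual task loss plus a penalty for agreeing on the same predictive distribution. The overlap penalty is an illustrative choice, not the paper's objective.

```python
import torch
import torch.nn.functional as F

def diverse_ensemble_loss(logits_list, y, diversity_coef=0.1):
    """Sum of per-model cross-entropies plus a pairwise agreement penalty:
    the inner product of two models' predictive distributions is high only
    when they concentrate mass on the same classes."""
    task = sum(F.cross_entropy(logits, y) for logits in logits_list)
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    agree = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            agree = agree + (probs[i] * probs[j]).sum(dim=1).mean()
    return task + diversity_coef * agree
```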
This list is automatically generated from the titles and abstracts of the papers on this site.