Continuous Weight Balancing
- URL: http://arxiv.org/abs/2103.16591v1
- Date: Tue, 30 Mar 2021 18:03:12 GMT
- Title: Continuous Weight Balancing
- Authors: Daniel J. Wu, Avoy Datta
- Abstract summary: We propose a simple method by which to choose sample weights for problems with highly imbalanced or skewed traits.
We derive sample weights from the transfer function between an estimated source and specified target distributions.
Our method outperforms both unweighted and discretely-weighted models on both regression and classification tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a simple method by which to choose sample weights for problems
with highly imbalanced or skewed traits. Rather than naively discretizing
regression labels to find binned weights, we take a more principled approach --
we derive sample weights from the transfer function between an estimated source
and specified target distributions. Our method outperforms both unweighted and
discretely-weighted models on both regression and classification tasks. We also
open-source our implementation of this method
(https://github.com/Daniel-Wu/Continuous-Weight-Balancing) to the scientific
community.
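As a concrete illustration of the abstract's idea, the following is a minimal sketch, assuming the source density is estimated with a Gaussian KDE and the target density is supplied as a function; the estimator actually used by the authors may differ, so consult the linked repository for the reference implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def continuous_balance_weights(labels, target_pdf):
    """Weight each sample by target density / estimated source density.

    Sketch only: the source distribution is estimated with a KDE and
    `target_pdf` is any user-specified density for the desired label
    distribution. See the authors' repository for their implementation.
    """
    source_pdf = gaussian_kde(labels)            # estimate the source distribution
    w = target_pdf(labels) / source_pdf(labels)  # source-to-target transfer ratio
    return w * len(w) / w.sum()                  # normalize to mean weight 1.0

# Example: rebalance skewed regression labels toward a uniform target.
y = np.random.exponential(scale=2.0, size=1000)
uniform_pdf = lambda t: np.where((t >= 0) & (t <= y.max()), 1.0 / y.max(), 1e-12)
weights = continuous_balance_weights(y, uniform_pdf)
```

Normalizing to mean weight 1.0 keeps the effective learning rate comparable to unweighted training.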
Related papers
- Aggregation Weighting of Federated Learning via Generalization Bound Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
arXiv Detail & Related papers (2023-11-10T08:50:28Z)
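For reference, a minimal sketch of the sample-proportion weighting this paper replaces (a FedAvg-style average with parameters simplified to flat vectors; the generalization-bound strategy itself is not reproduced here):

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Baseline aggregation: weight each client's parameters by its share
    of the total training samples. The paper swaps these proportions for
    weights derived from per-client generalization bound estimates."""
    sizes = np.asarray(client_sizes, dtype=float)
    proportions = sizes / sizes.sum()
    return sum(p * theta for p, theta in zip(proportions, client_params))

# Three clients with unequal data; parameters are flat numpy vectors here.
params = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
aggregated = fedavg_aggregate(params, client_sizes=[100, 50, 850])
```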
- Value-aware Importance Weighting for Off-policy Reinforcement Learning [11.3798693158017]
Importance sampling is a central idea underlying off-policy prediction in reinforcement learning.
In this work, we consider a broader class of importance weights to correct samples in off-policy learning.
We derive how such weights can be computed, and detail key properties of the resulting importance weights.
arXiv Detail & Related papers (2023-06-27T17:05:22Z)
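A sketch of the classical importance ratio this work broadens; the optional clipping is a common variance-control heuristic, not the paper's value-aware weighting:

```python
import numpy as np

def importance_weights(target_probs, behavior_probs, clip=None):
    """Standard off-policy ratios rho = pi(a|s) / b(a|s) for the actions
    actually taken under the behavior policy b."""
    rho = np.asarray(target_probs) / np.asarray(behavior_probs)
    return np.minimum(rho, clip) if clip is not None else rho

# Probabilities each policy assigns to the sampled actions.
rho = importance_weights([0.9, 0.2, 0.5], [0.3, 0.4, 0.5], clip=10.0)
```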
- Exploring Weight Balancing on Long-Tailed Recognition Problem [32.01426831450348]
Recognition problems in long-tailed data, in which the sample size per class is heavily skewed, have gained importance.
Weight balancing, which combines classical regularization techniques with two-stage training, has been proposed.
We analyze weight balancing by focusing on neural collapse and the cone effect at each training stage.
arXiv Detail & Related papers (2023-05-26T01:45:19Z)
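For context, a sketch of one classical re-weighting scheme of the kind that weight balancing combines with two-stage training (the "effective number of samples" weights of Cui et al.); the paper's own analysis centers on weight decay rather than this formula:

```python
import numpy as np

def class_balanced_weights(class_counts, beta=0.999):
    """Effective-number re-weighting: beta -> 0 gives uniform weights,
    beta -> 1 approaches inverse class frequency."""
    counts = np.asarray(class_counts, dtype=float)
    effective = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / effective
    return w * len(w) / w.sum()

# Heavily skewed class sizes typical of long-tailed recognition.
print(class_balanced_weights([5000, 500, 50, 5]))
```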
- Generalized Balancing Weights via Deep Neural Networks [0.0]
Estimating causal effects from observational data is a central problem in many domains.
We present generalized balancing weights, Neural Balancing Weights (NBW), to estimate the causal effects of an arbitrary mixture of discrete and continuous interventions.
arXiv Detail & Related papers (2022-11-14T17:03:56Z)
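Balancing weights of this kind are density ratios between the observed and the shifted (intervened) distributions. A common estimator is the classifier-based density-ratio trick, sketched below; NBW learns such ratios with deep networks, so this shows only the underlying idea:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def balancing_weights(x_source, x_target):
    """Density-ratio trick: train a probabilistic classifier to separate
    source from target samples, then convert its output into
    p_target(x) / p_source(x) for each source sample."""
    X = np.vstack([x_source, x_target])
    y = np.concatenate([np.zeros(len(x_source)), np.ones(len(x_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(x_source)[:, 1]
    return (p / (1.0 - p)) * (len(x_source) / len(x_target))

x_src = np.random.normal(0.0, 1.0, size=(500, 2))
x_tgt = np.random.normal(1.0, 1.0, size=(500, 2))
w = balancing_weights(x_src, x_tgt)
```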
- Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z)
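As a stand-in for the adaptive weight matrix between novel samples and base classes, here is a plain entropic-OT (Sinkhorn) transport plan between sample features and class prototypes; the paper's hierarchical formulation is more involved:

```python
import numpy as np

def sinkhorn_weight_matrix(novel_feats, base_protos, reg=0.1, n_iter=200):
    """Entropic optimal transport between novel samples and base-class
    prototypes; the returned plan can be read as a soft weight matrix."""
    C = ((novel_feats[:, None, :] - base_protos[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / reg)
    a = np.ones(len(novel_feats)) / len(novel_feats)  # uniform marginals
    b = np.ones(len(base_protos)) / len(base_protos)
    u = np.ones_like(a)
    for _ in range(n_iter):                           # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]                # transport plan

plan = sinkhorn_weight_matrix(np.random.randn(5, 8), np.random.randn(10, 8))
```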
- Residual-Quantile Adjustment for Adaptive Training of Physics-informed Neural Network [2.5769426017309915]
In this paper, we show that the bottleneck in the adaptive choice of samples for training efficiency is the behavior of the tail distribution of the numerical residual.
We propose the Residual-Quantile Adjustment (RQA) method for a better weight choice for each training sample.
Experiment results show that the proposed method can outperform several adaptive methods on various partial differential equation (PDE) problems.
arXiv Detail & Related papers (2022-09-09T12:39:38Z)
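A hypothetical reading of the residual-quantile idea, with the specifics (the capping rule and the quantile q) assumed rather than taken from the paper:

```python
import numpy as np

def residual_quantile_weights(residuals, q=0.9):
    """Assumed sketch: weight collocation points by PDE residual magnitude,
    capped at the q-th quantile so a heavy residual tail cannot dominate
    training. The adjustment actually used in the paper may differ."""
    r = np.abs(np.asarray(residuals))
    capped = np.minimum(r, np.quantile(r, q))
    return capped * len(capped) / capped.sum()  # normalize to mean 1.0

w = residual_quantile_weights(np.random.lognormal(size=1000))
```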
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z)
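A toy version of the distributional view: transport the empirical class marginal onto a balanced one. With a cost that only distinguishes classes, the optimal plan is diagonal and the induced per-sample weights collapse to inverse class frequency; the paper learns far richer, instance-level weights:

```python
import numpy as np

def ot_class_weights(labels, n_classes):
    """Transport the empirical class marginal to the uniform marginal;
    the diagonal ratio target/source becomes the per-class weight."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    source = counts / counts.sum()                # empirical marginal
    target = np.full(n_classes, 1.0 / n_classes)  # balanced marginal
    per_class = target / np.maximum(source, 1e-12)
    return per_class[labels]

w = ot_class_weights(np.array([0, 0, 0, 0, 1, 1, 2]), n_classes=3)
```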
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
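Sampling-free variational inference of this kind rests on propagating moments analytically instead of drawing noise samples. A one-layer sketch under multiplicative Gaussian activation noise (the paper's posterior parameterization is richer than this):

```python
import numpy as np

def linear_moments(x, W, alpha=0.5):
    """For y = (x * eps) @ W with independent eps ~ N(1, alpha):
    E[y] = x @ W and Var[y] = alpha * (x**2) @ (W**2)."""
    mean = x @ W
    var = alpha * (x ** 2) @ (W ** 2)
    return mean, var

mean, var = linear_moments(np.random.randn(3, 4), np.random.randn(4, 2))
```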
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
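One plausible form of the per-sample weighting described here is a softmax over scaled mini-batch losses; treat the exact formula and the sign convention for lam as assumptions rather than the paper's definition:

```python
import numpy as np

def absgd_weights(losses, lam=1.0):
    """Softmax attention over per-sample losses: lam > 0 up-weights hard
    examples (imbalance), lam < 0 down-weights them (label noise)."""
    z = np.asarray(losses) / lam
    z = z - z.max()                  # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p * len(p)                # rescale to mean weight 1.0

w = absgd_weights([0.2, 1.5, 0.9, 3.0], lam=0.5)
```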
- Counterfactual Representation Learning with Balancing Weights [74.67296491574318]
Key to causal inference with observational data is achieving balance in predictive features associated with each treatment type.
Recent literature has explored representation learning to achieve this goal.
We develop an algorithm for flexible, scalable and accurate estimation of causal effects.
arXiv Detail & Related papers (2020-10-23T19:06:03Z)
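The classical balancing weights this line of work builds on are inverse propensity weights; a sketch of the weighting step alone, with the paper's representation learning omitted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inverse_propensity_weights(X, treated):
    """Estimate the propensity e(x) = P(T=1 | x) and weight treated units
    by 1/e(x) and controls by 1/(1 - e(x)) to balance the two groups."""
    e = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    return np.where(treated == 1, 1.0 / e, 1.0 / (1.0 - e))

X = np.random.randn(200, 3)
t = (np.random.rand(200) < 0.3).astype(int)
w = inverse_propensity_weights(X, t)
```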
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.