Efficient Gradient Estimation via Adaptive Sampling and Importance
Sampling
- URL: http://arxiv.org/abs/2311.14468v2
- Date: Mon, 27 Nov 2023 08:04:04 GMT
- Title: Efficient Gradient Estimation via Adaptive Sampling and Importance
Sampling
- Authors: Corentin Salaün, Xingchang Huang, Iliyan Georgiev, Niloy J. Mitra,
Gurprit Singh
- Abstract summary: Adaptive or importance sampling reduces noise in gradient estimation.
We present an algorithm that can incorporate existing importance functions into our framework.
We observe improved convergence in classification and regression tasks with minimal computational overhead.
- Score: 34.50693643119071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning problems rely heavily on stochastic gradient descent (SGD)
for optimization. The effectiveness of SGD is contingent upon accurately
estimating gradients from a mini-batch of data samples. Instead of the commonly
used uniform sampling, adaptive or importance sampling reduces noise in
gradient estimation by forming mini-batches that prioritize crucial data
points. Previous research has suggested that data points should be selected
with probabilities proportional to their gradient norm. Nevertheless, existing
algorithms have struggled to efficiently integrate importance sampling into
machine learning frameworks. In this work, we make two contributions. First, we
present an algorithm that can incorporate existing importance functions into
our framework. Second, we propose a simplified importance function that relies
solely on the loss gradient of the output layer. By leveraging our proposed
gradient estimation techniques, we observe improved convergence in
classification and regression tasks with minimal computational overhead. We
validate the effectiveness of our adaptive and importance-sampling approach on
image and point-cloud datasets.
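To make the idea concrete, below is a minimal sketch (not the authors' released implementation) of importance-sampled mini-batch SGD in a PyTorch-style classification setup. Per-sample importance is approximated by the norm of the loss gradient with respect to the network output, in the spirit of the simplified importance function described above; samples are then drawn with probabilities proportional to that importance and reweighted by 1/(N·p_i) so the gradient estimate remains unbiased. All names (`model`, `x_pool`, `y_pool`, `optimizer`) are placeholders.

```python
import torch
import torch.nn.functional as F

def output_gradient_importance(model, x, y):
    """Per-sample importance ~ ||dL_i / d(output_i)||_2.

    Only the gradient of the loss w.r.t. the network output is needed, so the
    cost is one forward pass plus a cheap autograd step on the output layer
    (no full backward pass through the model per sample).
    """
    with torch.no_grad():
        logits = model(x)                                  # (N, C) outputs, no graph
    logits = logits.requires_grad_(True)                   # leaf tensor: track grads from here
    per_sample_loss = F.cross_entropy(logits, y, reduction="none")
    (grad_out,) = torch.autograd.grad(per_sample_loss.sum(), logits)
    return grad_out.norm(dim=1)                            # (N,)

def importance_sampled_step(model, optimizer, x_pool, y_pool, batch_size):
    """One SGD step on an importance-sampled, unbiasedly reweighted mini-batch.

    For illustration the importance of every candidate point is recomputed each
    step; a practical scheme would amortize or reuse stale scores.
    """
    n = x_pool.shape[0]
    imp = output_gradient_importance(model, x_pool, y_pool)
    probs = (imp + 1e-12) / (imp + 1e-12).sum()            # sampling distribution p_i
    idx = torch.multinomial(probs, batch_size, replacement=True)

    w = 1.0 / (n * probs[idx])                             # unbiasedness weights 1 / (N * p_i)
    logits = model(x_pool[idx])
    per_sample_loss = F.cross_entropy(logits, y_pool[idx], reduction="none")
    loss = (w * per_sample_loss).mean()                    # same expectation as the full-pool mean loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because each drawn sample is down-weighted by exactly how much it was over-sampled, the estimator keeps the same expectation as uniform mini-batching while reducing variance whenever the importance correlates with the true per-sample gradient norms.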
Related papers
- FLOPS: Forward Learning with OPtimal Sampling [1.694989793927645]
Gradient computation methods that rely only on forward passes, also referred to as queries, have recently gained attention.
Conventional forward learning consumes an enormous number of queries per data point to estimate gradients accurately through Monte Carlo sampling.
We propose to allocate the optimal number of queries to each data point in a batch during training to achieve a good balance between estimation accuracy and computational efficiency.
arXiv Detail & Related papers (2024-10-08T12:16:12Z)
- SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised Learning for Robust Infrared Small Target Detection [53.19618419772467]
Single-frame infrared small target (SIRST) detection aims to recognize small targets against cluttered backgrounds.
With the development of Transformer, the scale of SIRST models is constantly increasing.
With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed.
arXiv Detail & Related papers (2024-03-08T16:14:54Z)
- Delving into Effective Gradient Matching for Dataset Condensation [13.75957901381024]
The gradient matching method directly targets the training dynamics by matching the gradients obtained when training on the original and synthetic datasets.
We propose to match multi-level gradients to involve both intra-class and inter-class gradient information.
An overfitting-aware adaptive learning-step strategy is also proposed to trim unnecessary optimization steps and improve algorithmic efficiency.
arXiv Detail & Related papers (2022-07-30T21:31:10Z)
- Adaptive Sketches for Robust Regression with Importance Sampling [64.75899469557272]
We introduce data structures for solving robust regression through stochastic gradient descent (SGD).
Our algorithm effectively runs $T$ steps of SGD with importance sampling while using sublinear space and making just a single pass over the data.
arXiv Detail & Related papers (2022-07-16T03:09:30Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work concerns zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function-query cost (a generic query-based gradient estimator of this kind is sketched after the list below).
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
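Several of the related papers above (FLOPS and the zeroth-order hybrid method) estimate gradients from forward evaluations, i.e. queries, alone. As a point of reference only, here is a generic two-point random-direction zeroth-order estimator; it is not the specific algorithm of either paper, and the fixed query budget `num_queries` stands in for the adaptive query-allocation and coordinate-sampling strategies those papers propose. `f` is any black-box objective.

```python
import numpy as np

def zo_gradient(f, x, num_queries=20, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages random-direction finite differences
        (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u,   u ~ N(0, I),
    each of which costs only two function queries and no backprop.
    With Gaussian directions the estimate is unbiased up to O(mu^2)
    smoothing error.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x, dtype=float)
    for _ in range(num_queries):
        u = rng.standard_normal(x.shape)
        fd = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu)  # directional-derivative estimate
        g += fd * u
    return g / num_queries

if __name__ == "__main__":
    # Sanity check on a quadratic: the true gradient is A @ x.
    A = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * x @ A @ x
    x = np.array([1.0, 1.0])
    print("ZO estimate:", zo_gradient(f, x, num_queries=2000))
    print("true grad  :", A @ x)
```

The variance of such an estimator grows with dimension and with a fixed per-point query budget, which is exactly the cost that adaptive query allocation (FLOPS) and coordinate importance sampling (the ZO hybrid method) aim to reduce.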
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.