Deep Learning Training Procedure Augmentations
- URL: http://arxiv.org/abs/2211.14395v1
- Date: Fri, 25 Nov 2022 22:31:11 GMT
- Title: Deep Learning Training Procedure Augmentations
- Authors: Cristian Simionescu
- Abstract summary: Recent advances in Deep Learning have greatly improved performance on various tasks such as object detection, image segmentation, and sentiment analysis.
While this has led to great results, many with real-world applications, other relevant aspects of deep learning have remained neglected and largely unexplored.
We will present several novel deep learning training techniques which, while capable of offering significant performance gains, also reveal several interesting analysis results regarding convergence speed, optimization landscape, and adversarial robustness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Deep Learning have greatly improved performance on various
tasks such as object detection, image segmentation, and sentiment analysis. Until
very recently, most research directions have focused on beating state-of-the-art
results. This has materialized in the use of ever bigger models and of techniques
which help the training procedure extract more predictive power out of a given
dataset. While this has led to great results, many with real-world applications,
other relevant aspects of deep learning have remained neglected and largely
unexplored. In this work, we present several novel deep learning training
techniques which, while capable of offering significant performance gains, also
reveal several interesting analysis results regarding convergence speed,
optimization landscape smoothness, and adversarial robustness. The methods
presented in this work are the following:
$\bullet$ Perfect Ordering Approximation: a generalized, model-agnostic
curriculum learning approach. The results show the effectiveness of the
technique for improving training time and offer new insight into the training
process of deep networks (a hedged sketch of such a curriculum loop follows this list).
$\bullet$ Cascading Sum Augmentation: an extension of mixup capable of
utilizing more data points for linear interpolation by leveraging a smoother
optimization landscape. It can be used in computer vision tasks to improve both
prediction performance and passive model robustness (a sketch of a multi-sample
mixing step follows this list).
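The abstract does not specify how the "perfect ordering" is approximated, so the following is only a minimal sketch of a generic, model-agnostic curriculum loop. The per-sample loss used as a difficulty proxy and the easiest-first schedule are illustrative assumptions, not the thesis' actual algorithm.

```python
# Minimal curriculum-learning sketch in the spirit of Perfect Ordering Approximation.
# ASSUMPTION: difficulty is approximated by the current per-sample loss and samples
# are visited easiest-first; the thesis' actual ordering criterion may differ.
import torch
from torch.utils.data import DataLoader, Subset

def difficulty_scores(model, dataset, device="cpu"):
    """Score every sample by its current loss (assumed difficulty proxy, classification)."""
    loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
    model.eval()
    scores = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=256):
            scores.append(loss_fn(model(x.to(device)), y.to(device)).cpu())
    return torch.cat(scores)

def curriculum_epoch(model, dataset, optimizer, loss_fn, batch_size=64, device="cpu"):
    """Run one epoch visiting samples in (approximately) easiest-first order."""
    order = torch.argsort(difficulty_scores(model, dataset, device)).tolist()
    loader = DataLoader(Subset(dataset, order), batch_size=batch_size, shuffle=False)
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        optimizer.step()
```

Re-scoring once per epoch keeps the curriculum model-agnostic: any model and dataset that work with a standard training loop can be dropped in unchanged.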
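Similarly, the abstract only states that Cascading Sum Augmentation interpolates more than two data points. The sketch below generalizes the standard two-sample mixup step to K samples using Dirichlet-drawn convex weights; this weighting and the fixed K are assumptions about the mixing scheme, not the paper's exact cascading procedure.

```python
# Mixup-style combination of K samples, sketched as an assumed generalization of
# Cascading Sum Augmentation; the Dirichlet weighting and fixed k are illustrative.
import torch

def k_point_mix(x, y_onehot, k=4, alpha=1.0):
    """Linearly combine random groups of k inputs and their one-hot labels."""
    batch = x.size(0)
    # Convex combination weights; with k=2 this reduces to mixup's Beta(alpha, alpha).
    weights = torch.distributions.Dirichlet(torch.full((k,), float(alpha))).sample((batch,))
    idx = torch.randint(0, batch, (batch, k))        # k source samples per output sample
    w = weights.view(batch, k, *([1] * (x.dim() - 1)))
    x_mix = (w * x[idx]).sum(dim=1)                  # weighted sum of k inputs
    y_mix = (weights.unsqueeze(-1) * y_onehot[idx]).sum(dim=1)  # matching soft labels
    return x_mix, y_mix
```

Training then proceeds exactly as with mixup, using the mixed inputs together with a soft-label loss.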
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - Accelerating Deep Learning with Fixed Time Budget [2.190627491782159]
This paper proposes an effective technique for training arbitrary deep learning models within fixed time constraints.
The proposed method is extensively evaluated in both classification and regression tasks in computer vision.
arXiv Detail & Related papers (2024-10-03T21:18:04Z) - Efficient Human Pose Estimation: Leveraging Advanced Techniques with MediaPipe [5.439359582541082]
This study presents significant enhancements in human pose estimation using the MediaPipe framework.
The research focuses on improving accuracy, computational efficiency, and real-time processing capabilities.
The advancements have wide-ranging applications in augmented reality, sports analytics, and healthcare.
arXiv Detail & Related papers (2024-06-21T21:00:45Z) - Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z) - Improving Pre-Trained Weights Through Meta-Heuristics Fine-Tuning [0.0]
We propose to use meta-heuristic techniques to fine-tune pre-trained weights.
Experimental results show nature-inspired algorithms' capacity in exploring the neighborhood of pre-trained weights.
arXiv Detail & Related papers (2022-12-19T13:40:26Z) - Training Efficiency and Robustness in Deep Learning [2.6451769337566406]
We study approaches to improve the training efficiency and robustness of deep learning models.
We find that prioritizing learning on more informative training data increases convergence speed and improves generalization performance on test data.
We show that a redundancy-aware modification to the sampling of training data improves training speed, and we develop an efficient method for detecting the diversity of the training signal.
arXiv Detail & Related papers (2021-12-02T17:11:33Z) - Powerpropagation: A sparsity inducing weight reparameterisation [65.85142037667065]
We introduce Powerpropagation, a new weight reparameterisation for neural networks that leads to inherently sparse models.
Models trained in this manner exhibit similar performance, but have a weight distribution with markedly higher density at zero, allowing more parameters to be pruned safely.
Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark.
arXiv Detail & Related papers (2021-10-01T10:03:57Z) - Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better [0.0]
With the progressive improvements in deep learning models, their number of parameters, latency, resources required to train, etc. have increased significantly.
We present and motivate the problem of efficiency in deep learning, followed by a thorough survey of the five core areas of model efficiency.
We believe this is the first comprehensive survey in the efficient deep learning space that covers the landscape of model efficiency from modeling techniques to hardware support.
arXiv Detail & Related papers (2021-06-16T17:31:38Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z) - Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z) - Large Batch Training Does Not Need Warmup [111.07680619360528]
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications.
In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training.
Based on our analysis, we bridge the gap and illustrate the theoretical insights for three popular large-batch training techniques.
arXiv Detail & Related papers (2020-02-04T23:03:12Z)