SoftAdapt: Techniques for Adaptive Loss Weighting of Neural Networks with Multi-Part Loss Functions
- URL: http://arxiv.org/abs/1912.12355v1
- Date: Fri, 27 Dec 2019 22:23:16 GMT
- Title: SoftAdapt: Techniques for Adaptive Loss Weighting of Neural Networks with Multi-Part Loss Functions
- Authors: A. Ali Heydari, Craig A. Thompson and Asif Mehmood
- Abstract summary: We propose a family of methods, called SoftAdapt, that dynamically change function weights for multi-part loss functions.
SoftAdapt is mathematically intuitive, computationally efficient and straightforward to implement.
- Score: 1.2891210250935146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adaptive loss function formulation is an active area of research and has
gained a great deal of popularity in recent years, following the success of
deep learning. However, existing frameworks of adaptive loss functions often
suffer from slow convergence and poor choice of weights for the loss
components. Traditionally, the elements of a multi-part loss function are
weighted equally or their weights are determined through heuristic approaches
that yield near-optimal (or sub-optimal) results. To address this problem, we
propose a family of methods, called SoftAdapt, that dynamically change function
weights for multi-part loss functions based on live performance statistics of
the component losses. SoftAdapt is mathematically intuitive, computationally
efficient and straightforward to implement. In this paper, we present the
mathematical formulation and pseudocode for SoftAdapt, along with results from
applying our methods to image reconstruction (Sparse Autoencoders) and
synthetic data generation (Introspective Variational Autoencoders).
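As a rough illustration of the idea in the abstract, the sketch below computes loss weights as a softmax over the recent rates of change of each component loss, so components that are decreasing slowest (or increasing) get more weight. This is not the authors' reference implementation: the window length, the finite-difference slope estimate, and the `beta` value are illustrative choices, and the paper also defines normalized and loss-weighted variants not shown here.
```python
import numpy as np

def softadapt_weights(loss_history, beta=0.1, eps=1e-8):
    """SoftAdapt-style weights: softmax over the recent rate of change
    of each component loss (illustrative sketch, not the reference code).

    loss_history: (T, k) array holding the last T recorded values of the
                  k component losses, with T >= 2.
    beta:         temperature; beta > 0 shifts weight toward components
                  whose losses are decreasing slowest (or increasing).
    """
    loss_history = np.asarray(loss_history, dtype=float)
    # Finite-difference estimate of each component's rate of change.
    rates = loss_history[-1] - loss_history[-2]
    # Numerically stable softmax over the rates.
    z = beta * (rates - rates.max())
    weights = np.exp(z)
    return weights / (weights.sum() + eps)

# Illustrative use in a training loop:
# w = softadapt_weights(history_of_component_losses, beta=0.1)
# total_loss = sum(wi * li for wi, li in zip(w, current_component_losses))
```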
Related papers
- Deep Learning Optimization Using Self-Adaptive Weighted Auxiliary Variables [20.09691024284159]
In this paper, we develop a new framework for learning via neural networks or physics-informed networks.
The robustness of our framework guarantees that the new loss helps optimize the original problem.
arXiv Detail & Related papers (2025-04-30T10:43:13Z)
- Steerable Pyramid Weighted Loss: Multi-Scale Adaptive Weighting for Semantic Segmentation [0.4662017507844857]
We propose a novel steerable pyramid-based weighted (SPW) loss function that efficiently generates adaptive weight maps.
Our results demonstrate that the proposed SPW loss function achieves superior pixel precision and segmentation accuracy with minimal computational overhead.
arXiv Detail & Related papers (2025-03-09T13:15:01Z)
- Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms [80.37846867546517]
We show how to train eight different neural networks with custom objectives.
We exploit their second-order information via their empirical Fisher and Hessian matrices.
We apply Newton Losses to achieve significant improvements for less differentiable algorithms.
arXiv Detail & Related papers (2024-10-24T18:02:11Z)
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
- FourierLoss: Shape-Aware Loss Function with Fourier Descriptors [1.5659201748872393]
This work introduces a new shape-aware loss function, which we name FourierLoss.
It quantifies the shape dissimilarity between the ground truth and the predicted segmentation maps through Fourier descriptors calculated on their objects, and penalizes this dissimilarity during network training.
Experiments revealed that the proposed shape-aware loss function led to statistically significantly better results for liver segmentation, compared to its counterparts.
arXiv Detail & Related papers (2023-09-21T14:23:10Z)
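To make the Fourier-descriptor idea concrete, here is a small numpy sketch, not the paper's FourierLoss: ordered boundary points are read as complex numbers, FFT magnitudes give a translation-tolerant shape descriptor, and an L2 distance between descriptors of the predicted and ground-truth contours measures shape dissimilarity. The contour extraction step, the number of coefficients kept, and the normalization are assumptions, and a real training loss would need a differentiable formulation.
```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=16, eps=1e-8):
    """Simplified Fourier descriptors of an ordered (N, 2) boundary.

    Boundary points are read as complex numbers x + iy; dropping the DC
    term gives translation tolerance, and dividing by the magnitude of
    the first kept coefficient gives a rough scale normalization.
    """
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z)[1:n_coeffs + 1]   # drop DC, keep low frequencies
    mags = np.abs(coeffs)
    return mags / (mags[0] + eps)

def shape_dissimilarity(pred_contour, gt_contour, n_coeffs=16):
    """Squared L2 distance between the descriptors of two contours."""
    d_pred = fourier_descriptors(pred_contour, n_coeffs)
    d_gt = fourier_descriptors(gt_contour, n_coeffs)
    return float(np.sum((d_pred - d_gt) ** 2))
```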
- Alternate Loss Functions for Classification and Robust Regression Can Improve the Accuracy of Artificial Neural Networks [6.452225158891343]
This paper shows that the training speed and final accuracy of neural networks can depend significantly on the loss function used for training.
Two new classification loss functions that significantly improve performance on a wide variety of benchmark tasks are proposed.
arXiv Detail & Related papers (2023-03-17T12:52:06Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
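The adaptive collocation scheme summarized above can be sketched as residual-proportional resampling: candidate points are scored by the magnitude of the PDE residual under the current model, and new collocation points are drawn with probability proportional to that error. This is only an illustrative scheme, not the paper's algorithm; `residual_fn`, the candidate pool, and the sample sizes are placeholders.
```python
import numpy as np

def resample_collocation(candidates, residual_fn, n_points, rng=None):
    """Draw collocation points with probability proportional to the
    current PDE residual magnitude (illustrative adaptive scheme).

    candidates:  (M, d) array of candidate locations in the domain.
    residual_fn: placeholder callable mapping (M, d) -> (M,) PDE residuals
                 under the current network.
    n_points:    number of collocation points for the next training phase.
    """
    rng = np.random.default_rng() if rng is None else rng
    errors = np.abs(residual_fn(candidates)) + 1e-12
    probs = errors / errors.sum()
    idx = rng.choice(len(candidates), size=n_points, replace=False, p=probs)
    return candidates[idx]
```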
- Refining neural network predictions using background knowledge [68.35246878394702]
We show that logical background knowledge can be used in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
arXiv Detail & Related papers (2022-06-10T10:17:59Z)
- AutoLoss: Automated Loss Function Search in Recommendations [34.27873944762912]
We propose an AutoLoss framework that can automatically and adaptively search for the appropriate loss function from a set of candidates.
Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors.
arXiv Detail & Related papers (2021-06-12T08:15:00Z)
- Searching for Robustness: Loss Learning for Noisy Classification Tasks [81.70914107917551]
We parameterize a flexible family of loss functions using Taylor polynomials and apply evolutionary strategies to search for noise-robust losses in this space.
The resulting white-box loss provides a simple and fast "plug-and-play" module that enables effective noise-robust learning in diverse downstream tasks.
arXiv Detail & Related papers (2021-02-27T15:27:22Z)
- Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search [101.73248560009124]
We propose an effective convergence-simulation driven evolutionary search algorithm, CSE-Autoloss, for speeding up the search progress.
We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses.
Our experiments show that the best-discovered loss function combinations outperform default combinations by 1.1% and 0.8% in terms of mAP for two-stage and one-stage detectors.
arXiv Detail & Related papers (2021-02-09T08:34:52Z)
- Multi-Loss Weighting with Coefficient of Variations [19.37721431024278]
We propose a weighting scheme based on the coefficient of variations and set the weights based on properties observed while training the model.
The proposed method incorporates a measure of uncertainty to balance the losses, and as a result the loss weights evolve during training without requiring another (learning based) optimisation.
The validity of the approach is shown empirically for depth estimation and semantic segmentation on multiple datasets.
arXiv Detail & Related papers (2020-09-03T14:51:19Z)
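The coefficient-of-variation weighting summarized in the last entry is closest in spirit to SoftAdapt. A minimal sketch, not the authors' implementation, keeps a recent window of each component loss and sets its weight proportional to the coefficient of variation (std / mean), renormalized to sum to one; the window length and the normalization are assumptions.
```python
import numpy as np

def cov_weights(loss_history, eps=1e-8):
    """Loss weights from the coefficient of variation (std / mean) of
    each component over a recent window (illustrative sketch).

    loss_history: (T, k) array of the last T values of k component losses.
    Returns k weights proportional to each component's coefficient of
    variation, normalized to sum to one.
    """
    loss_history = np.asarray(loss_history, dtype=float)
    mean = loss_history.mean(axis=0)
    std = loss_history.std(axis=0)
    cov = std / (mean + eps)
    return cov / (cov.sum() + eps)

# Illustrative use: total = sum(w * l for w, l in zip(cov_weights(hist), losses))
```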