Unified Batch Normalization: Identifying and Alleviating the Feature
Condensation in Batch Normalization and a Unified Framework
- URL: http://arxiv.org/abs/2311.15993v2
- Date: Fri, 8 Mar 2024 09:12:57 GMT
- Title: Unified Batch Normalization: Identifying and Alleviating the Feature
Condensation in Batch Normalization and a Unified Framework
- Authors: Shaobo Wang, Xiangdong Zhang, Dongrui Liu, Junchi Yan
- Abstract summary: Batch Normalization (BN) has become an essential technique in contemporary neural network design.
We propose a two-stage unified framework called Unified Batch Normalization (UBN).
UBN significantly enhances performance across different visual backbones and different vision tasks.
- Score: 55.22949690864962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Batch Normalization (BN) has become an essential technique in contemporary
neural network design, enhancing training stability. Specifically, BN employs
centering and scaling operations to standardize features along the batch
dimension and uses an affine transformation to recover features. Although
standard BN has shown its capability to improve deep neural network training
and convergence, it still exhibits inherent limitations in certain cases.
Current enhancements to BN typically address only isolated aspects of its
mechanism. In this work, we critically examine BN from a feature perspective,
identifying feature condensation during BN as a factor detrimental to test
performance. To tackle this problem, we propose a two-stage unified framework
called Unified Batch Normalization (UBN). In the first stage, we employ a
straightforward feature condensation threshold to mitigate condensation
effects, thereby preventing improper updates of statistical norms. In the
second stage, we unify various normalization variants to boost each component
of BN. Our experimental results reveal that UBN significantly enhances
performance across different visual backbones and different vision tasks, and
notably expedites network training convergence, particularly in early training
stages. Our method improves accuracy by about 3% on ImageNet classification and
mean average precision by about 4% on both object detection and instance
segmentation on the COCO dataset, demonstrating its effectiveness in real-world
scenarios.
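The abstract describes the first stage only at a high level, so the following is a minimal, hypothetical PyTorch sketch of a condensation-gated BN layer: running statistics are updated only while a condensation score stays below a threshold. The class name GatedBN2d, the cosine-similarity-based condensation score, and the default threshold are assumptions, not the paper's definitions, and the second stage (unifying normalization variants per BN component) is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedBN2d(nn.Module):
    """Hypothetical sketch of a condensation-gated BN layer (not the authors' code):
    running statistics are updated only when the batch does not look 'condensed',
    i.e. when its samples are not collapsing toward a shared feature direction."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1, cond_threshold=0.9):
        super().__init__()
        self.eps, self.momentum, self.cond_threshold = eps, momentum, cond_threshold
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    @staticmethod
    def condensation_score(x):
        # Assumed metric: mean cosine similarity between each flattened sample
        # and the batch-mean direction; values near 1 indicate condensed features.
        v = x.flatten(1)
        mean_dir = v.mean(dim=0, keepdim=True).expand_as(v)
        return F.cosine_similarity(v, mean_dir, dim=1).mean()

    def forward(self, x):  # x: (N, C, H, W)
        if self.training:
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            # Stage-1 idea from the abstract: skip the statistics update when the
            # batch looks condensed, to avoid improper updates of statistical norms.
            if self.condensation_score(x) < self.cond_threshold:
                with torch.no_grad():
                    self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                    self.running_var.mul_(1 - self.momentum).add_(self.momentum * var)
        else:
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.eps)
        return self.weight[None, :, None, None] * x_hat + self.bias[None, :, None, None]
```

In use, such a layer would simply stand in for nn.BatchNorm2d in a backbone; the only point of the sketch is the gating of the running-statistics update.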
Related papers
- Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation [67.77048565738728]
Continual learning involves learning a sequence of tasks and balancing their knowledge appropriately.
We propose Adaptive Balance of BN (AdaB$^2$N), which appropriately incorporates a Bayesian-based strategy to adapt task-wise contributions.
Our approach achieves significant performance gains across a wide range of benchmarks.
arXiv Detail & Related papers (2023-10-13T04:50:40Z)
- Patch-aware Batch Normalization for Improving Cross-domain Robustness [55.06956781674986]
Cross-domain tasks pose a challenge: a model's performance degrades when the training set and the test set follow different distributions.
We propose a novel method called patch-aware batch normalization (PBN).
By exploiting the differences between local patches of an image, our proposed PBN can effectively enhance the robustness of the model's parameters.
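The summary above does not say how the local patches enter the normalization, so the snippet below is only one plausible, hypothetical reading in PyTorch: compute batch statistics per local spatial patch instead of per whole feature map. The function name patchwise_normalize, the non-overlapping patch layout, and the patch size are assumptions, not the paper's PBN.

```python
import torch

def patchwise_normalize(x, patch=8, eps=1e-5):
    """Illustrative only (not the paper's PBN): normalize each non-overlapping
    patch x patch spatial block with its own batch statistics instead of one
    global statistic per channel. Assumes H and W are divisible by `patch`."""
    n, c, h, w = x.shape
    # Reshape to (N, C, H/p, p, W/p, p) so every spatial block forms its own group.
    blocks = x.view(n, c, h // patch, patch, w // patch, patch)
    mean = blocks.mean(dim=(0, 3, 5), keepdim=True)                # per channel and per block
    var = blocks.var(dim=(0, 3, 5), unbiased=False, keepdim=True)
    blocks = (blocks - mean) / torch.sqrt(var + eps)
    return blocks.view(n, c, h, w)
```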
arXiv Detail & Related papers (2023-04-06T03:25:42Z)
- An Adaptive Batch Normalization in Deep Learning [0.0]
Batch Normalization (BN) is a way to accelerate and stabilize training in deep convolutional neural networks.
We propose a threshold-based adaptive BN approach that separates the data that requires BN from the data that does not.
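The selection rule is not given in the summary above, so this is a hypothetical per-sample gate in PyTorch: samples whose own statistics already look roughly standardized bypass normalization, while the rest are batch-normalized. The deviation measure, the threshold, and the name selective_batch_norm are placeholders rather than the paper's criterion.

```python
import torch

def selective_batch_norm(x, threshold=0.2, eps=1e-5):
    """Hypothetical per-sample gate (not the paper's rule): batch-normalize only
    the samples whose own statistics deviate noticeably from zero mean / unit
    variance, and pass the remaining samples through unchanged."""
    mean = x.mean(dim=(0, 2, 3), keepdim=True)
    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    normalized = (x - mean) / torch.sqrt(var + eps)
    # Assumed deviation measure: distance of each sample's (mean, std) from (0, 1).
    s_mean = x.mean(dim=(1, 2, 3))
    s_std = x.std(dim=(1, 2, 3))
    needs_bn = ((s_mean.abs() + (s_std - 1.0).abs()) > threshold).float()
    needs_bn = needs_bn.view(-1, 1, 1, 1)
    return needs_bn * normalized + (1.0 - needs_bn) * x
```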
arXiv Detail & Related papers (2022-11-03T12:12:56Z)
- Batch Normalization Explained [31.66311831317311]
We show that batch normalization (BN) boosts deep network (DN) learning and inference performance.
BN is an unsupervised learning technique that adapts the geometry of a DN's spline partition to match the data.
We also show that the variation of BN statistics between mini-batches introduces a dropout-like random perturbation to the partition boundaries.
arXiv Detail & Related papers (2022-09-29T13:41:27Z)
- Counterbalancing Teacher: Regularizing Batch Normalized Models for Robustness [15.395021925719817]
Batch normalization (BN) is a technique for training deep neural networks that accelerates their convergence to reach higher accuracy.
We show that BN incentivizes the model to rely on low-variance features that are highly specific to the training (in-domain) data.
We propose Counterbalancing Teacher (CT) to enforce the student network's learning of robust representations.
arXiv Detail & Related papers (2022-07-04T16:16:24Z)
- Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning [23.621259845287824]
Batch Normalization (BN) has been extensively studied for neural nets in various computer vision tasks.
We develop a new update patch for BN, tailored particularly for exemplar-based class-incremental learning (CIL).
arXiv Detail & Related papers (2022-01-29T11:03:03Z)
- MimicNorm: Weight Mean and Last BN Layer Mimic the Dynamic of Batch Normalization [60.36100335878855]
We propose a novel normalization method, named MimicNorm, to improve the convergence and efficiency in network training.
We leverage neural tangent kernel (NTK) theory to prove that our weight mean operation whitens activations and transitions the network into a chaotic regime, like a BN layer.
MimicNorm achieves similar accuracy for various network structures, including ResNets and lightweight networks like ShuffleNet, while reducing memory consumption by about 20%.
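Going only by the summary above, here is a rough PyTorch sketch of the weight-mean idea, assuming it amounts to re-centering each convolution filter to zero mean before use; the class name and per-filter centering are assumptions, and the NTK analysis is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanCenteredConv2d(nn.Conv2d):
    """Rough sketch of the 'weight mean' idea as interpreted here (not the
    paper's code): re-center every convolution filter to zero mean before use."""

    def forward(self, x):
        # Subtract each output filter's mean so the effective weights have zero mean.
        w = self.weight - self.weight.mean(dim=(1, 2, 3), keepdim=True)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

Per the title, MimicNorm pairs such a weight operation with a single BN layer kept at the end of the network; that part is omitted here.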
arXiv Detail & Related papers (2020-10-19T07:42:41Z)
- Double Forward Propagation for Memorized Batch Normalization [68.34268180871416]
Batch Normalization (BN) has been a standard component in designing deep neural networks (DNNs).
We propose a memorized batch normalization (MBN) which considers multiple recent batches to obtain more accurate and robust statistics.
Compared to related methods, the proposed MBN exhibits consistent behaviors in both training and inference.
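A minimal PyTorch sketch of the "multiple recent batches" idea described above, assuming it means averaging per-batch statistics kept in a small FIFO buffer; the buffer length, the detached statistics, and the class name MemorizedBNStats are illustrative, and the paper's double-forward-propagation scheme is not reproduced.

```python
from collections import deque
import torch

class MemorizedBNStats:
    """Illustrative sketch (not the paper's MBN): normalize with statistics
    averaged over the last k batches, kept in a small FIFO buffer, instead of
    the current batch alone."""

    def __init__(self, k=4, eps=1e-5):
        self.eps = eps
        self.means = deque(maxlen=k)
        self.vars = deque(maxlen=k)

    def __call__(self, x):  # x: (N, C, H, W)
        # Statistics are detached here for simplicity; the paper's double forward
        # propagation, which keeps them differentiable, is not reproduced.
        self.means.append(x.mean(dim=(0, 2, 3)).detach())
        self.vars.append(x.var(dim=(0, 2, 3), unbiased=False).detach())
        mean = torch.stack(list(self.means)).mean(dim=0)
        var = torch.stack(list(self.vars)).mean(dim=0)
        return (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.eps)
```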
arXiv Detail & Related papers (2020-10-10T08:48:41Z)
- Towards Stabilizing Batch Statistics in Backward Propagation of Batch Normalization [126.6252371899064]
Moving Average Batch Normalization (MABN) is a novel normalization method.
We show that MABN can completely restore the performance of vanilla BN in small batch cases.
Our experiments demonstrate the effectiveness of MABN in multiple computer vision tasks including ImageNet and COCO.
arXiv Detail & Related papers (2020-01-19T14:41:22Z)