Context Normalization Layer with Applications
- URL: http://arxiv.org/abs/2303.07651v2
- Date: Fri, 2 Feb 2024 21:23:18 GMT
- Title: Context Normalization Layer with Applications
- Authors: Bilal Faye, Mohamed-Djallel Dilmi, Hanane Azzag, Mustapha Lebbah,
Djamel Bouchaffra
- Abstract summary: This study proposes a new normalization technique, called context normalization, for image data.
It adjusts the scaling of features based on the characteristics of each sample, which improves the model's convergence speed and performance.
The effectiveness of context normalization is demonstrated on various datasets, and its performance is compared to other standard normalization techniques.
- Score: 0.1499944454332829
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Normalization is a pre-processing step that converts the data into a more
usable representation. Within deep neural networks (DNNs), the batch
normalization (BN) technique uses normalization to address the problem of
internal covariate shift. It can be packaged as a general module, which has
been extensively integrated into various DNNs, to stabilize and accelerate
training, presumably leading to improved generalization. However, the effect of
BN is dependent on the mini-batch size and it does not take into account any
groups or clusters that may exist in the dataset when estimating population
statistics. This study proposes a new normalization technique, called context
normalization, for image data. This approach adjusts the scaling of features
based on the characteristics of each sample, which improves the model's
convergence speed and performance by adapting the data values to the context of
the target task. The effectiveness of context normalization is demonstrated on
various datasets, and its performance is compared to other standard
normalization techniques.
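The abstract does not spell out the exact formulation, but the core idea of normalizing each sample with parameters tied to its context can be sketched roughly as below. The module name `ContextNorm`, the `num_contexts` argument, the learnable per-context mean/scale parameters, and the integer `context_ids` input are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a context-dependent normalization layer (assumptions noted above).
import torch
import torch.nn as nn

class ContextNorm(nn.Module):
    def __init__(self, num_features, num_contexts, eps=1e-5):
        super().__init__()
        # Learnable mean and scale per context (illustrative design choice).
        self.mu = nn.Parameter(torch.zeros(num_contexts, num_features))
        self.sigma = nn.Parameter(torch.ones(num_contexts, num_features))
        self.eps = eps

    def forward(self, x, context_ids):
        # x: (N, C, H, W) feature maps; context_ids: (N,) long tensor of context labels.
        mu = self.mu[context_ids].view(x.size(0), x.size(1), 1, 1)
        sigma = self.sigma[context_ids].view(x.size(0), x.size(1), 1, 1)
        # Standardize each sample with the parameters of its own context.
        return (x - mu) / (sigma.abs() + self.eps)
```

A BN-style affine layer could follow; whether the per-context statistics are learned or estimated from data is not specified in this summary.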
Related papers
- Adaptative Context Normalization: A Boost for Deep Learning in Image Processing [0.07499722271664146]
Adaptative Context Normalization (ACN) is a novel supervised approach that introduces the concept of "context".
ACN ensures speed, convergence, and superior performance compared to BN and MN.
arXiv Detail & Related papers (2024-09-07T08:18:10Z) - Supervised Batch Normalization [0.08192907805418585]
Batch Normalization (BN) is a widely-used technique in neural networks.
We propose Supervised Batch Normalization (SBN), a pioneering approach.
We define contexts as modes, categorizing data with similar characteristics.
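Reading this summary as computing batch statistics separately for each labeled mode, a rough sketch could look like the following; the function name, its signature, and the per-mode grouping are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch only: batch normalization with statistics computed per
# labeled mode ("context"), followed by a shared affine transform.
import torch

def supervised_batch_norm(x, mode_ids, gamma, beta, eps=1e-5):
    # x: (N, C) features; mode_ids: (N,) long tensor of mode labels.
    out = torch.empty_like(x)
    for m in mode_ids.unique():
        idx = mode_ids == m
        mean = x[idx].mean(dim=0, keepdim=True)
        var = x[idx].var(dim=0, unbiased=False, keepdim=True)
        out[idx] = (x[idx] - mean) / torch.sqrt(var + eps)
    return gamma * out + beta
```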
arXiv Detail & Related papers (2024-05-27T10:30:21Z) - Enhancing Neural Network Representations with Prior Knowledge-Based Normalization [0.07499722271664146]
We introduce a new approach to multi-mode normalization that leverages prior knowledge to improve neural network representations.
Our methods demonstrate superior convergence and performance across tasks in image classification, domain adaptation, and image generation.
arXiv Detail & Related papers (2024-03-25T14:17:38Z) - BCN: Batch Channel Normalization for Image Classification [13.262032378453073]
This paper presents a novel normalization technique called Batch Channel Normalization (BCN)
As a basic block, BCN can be easily integrated into existing models for various applications in the field of computer vision.
arXiv Detail & Related papers (2023-12-01T14:01:48Z) - Batch Layer Normalization, A new normalization layer for CNNs and RNN [0.0]
This study introduces a new normalization layer termed Batch Layer Normalization (BLN)
As a combined version of batch and layer normalization, BLN adaptively weights mini-batch and feature normalization according to the inverse of the mini-batch size.
Test results indicate the application potential of BLN and its faster convergence than batch normalization and layer normalization in both Convolutional and Recurrent Neural Networks.
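One plausible reading of weighting mini-batch and feature normalization by the inverse mini-batch size is a convex combination of batch-normalized and layer-normalized activations; the weighting rule in the sketch below is an illustrative assumption, not the paper's exact formula.

```python
# Illustrative sketch: blend batch-wise and feature-wise (layer) normalization,
# leaning more on layer statistics when the mini-batch is small. Assumed weighting.
import torch

def batch_layer_norm(x, eps=1e-5):
    # x: (N, C) activations for one mini-batch.
    n = x.size(0)
    w = 1.0 / n  # assumed: weight on feature (layer) statistics grows as batches shrink
    bn = (x - x.mean(dim=0, keepdim=True)) / torch.sqrt(x.var(dim=0, unbiased=False, keepdim=True) + eps)
    ln = (x - x.mean(dim=1, keepdim=True)) / torch.sqrt(x.var(dim=1, unbiased=False, keepdim=True) + eps)
    return (1.0 - w) * bn + w * ln
```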
arXiv Detail & Related papers (2022-09-19T10:12:51Z) - Improving the Sample-Complexity of Deep Classification Networks with
Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Test-time Batch Statistics Calibration for Covariate Shift [66.7044675981449]
We propose to adapt the deep models to the novel environment during inference.
We present a general formulation $\alpha$-BN to calibrate the batch statistics.
We also present a novel loss function to form a unified test time adaptation framework Core.
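The summary does not give the formulation, but a common way to calibrate statistics at test time is to interpolate between the stored training statistics and the current test-batch statistics; the sketch below assumes exactly that interpolation, which may differ in detail from the paper's $\alpha$-BN.

```python
# Illustrative sketch: interpolate stored (source) BN statistics with the
# statistics of the current test batch. The interpolation form is an assumption.
import torch

def alpha_bn(x, running_mean, running_var, alpha=0.9, eps=1e-5):
    # x: (N, C) test-time activations; running_mean/var: statistics saved during training.
    test_mean = x.mean(dim=0)
    test_var = x.var(dim=0, unbiased=False)
    mean = alpha * test_mean + (1.0 - alpha) * running_mean
    var = alpha * test_var + (1.0 - alpha) * running_var
    return (x - mean) / torch.sqrt(var + eps)
```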
arXiv Detail & Related papers (2021-10-06T08:45:03Z) - Metadata Normalization [54.43363251520749]
Batch Normalization (BN) normalizes feature distributions by standardizing with batch statistics.
BN does not correct for the influence of extraneous variables or multiple distributions on the features.
We introduce the Metadata Normalization layer, a new batch-level operation which can be used end-to-end within the training framework.
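The summary describes removing the influence of extraneous variables at the batch level; a standard way to do this is to regress the features on the metadata within each batch and keep the residual. The sketch below assumes that least-squares residualization and is not taken from the paper.

```python
# Illustrative sketch: remove the component of the features explained by the
# metadata of the current batch (batch-level least-squares residualization).
import torch

def metadata_normalize(features, metadata):
    # features: (N, C) activations; metadata: (N, M) extraneous variables as floats.
    ones = torch.ones(metadata.size(0), 1)
    X = torch.cat([ones, metadata], dim=1)            # add intercept column
    beta = torch.linalg.lstsq(X, features).solution   # (M+1, C) batch-level fit
    return features - metadata @ beta[1:]             # keep intercept, remove metadata effect
```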
arXiv Detail & Related papers (2021-04-19T05:10:26Z) - Double Forward Propagation for Memorized Batch Normalization [68.34268180871416]
Batch Normalization (BN) has been a standard component in designing deep neural networks (DNNs)
We propose a memorized batch normalization (MBN) which considers multiple recent batches to obtain more accurate and robust statistics.
Compared to related methods, the proposed MBN exhibits consistent behaviors in both training and inference.
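As a rough illustration of using several recent batches for the statistics, one could keep a small buffer of per-batch moments and normalize with their average; the buffer size and simple averaging rule below are assumptions, not the paper's procedure.

```python
# Illustrative sketch: normalize with statistics averaged over the k most
# recent mini-batches. Buffer size and plain averaging are assumptions.
from collections import deque
import torch

class MemorizedStats:
    def __init__(self, k=5):
        self.means = deque(maxlen=k)
        self.vars = deque(maxlen=k)

    def normalize(self, x, eps=1e-5):
        # x: (N, C) current mini-batch activations.
        self.means.append(x.mean(dim=0))
        self.vars.append(x.var(dim=0, unbiased=False))
        mean = torch.stack(list(self.means)).mean(dim=0)
        var = torch.stack(list(self.vars)).mean(dim=0)
        return (x - mean) / torch.sqrt(var + eps)
```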
arXiv Detail & Related papers (2020-10-10T08:48:41Z) - Optimization Theory for ReLU Neural Networks Trained with Normalization
Layers [82.61117235807606]
The success of deep neural networks is in part due to the use of normalization layers.
Our analysis shows how the introduction of normalization changes the optimization landscape and can enable faster convergence.
arXiv Detail & Related papers (2020-06-11T23:55:54Z) - Stochastic batch size for adaptive regularization in deep network
optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in the deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.