On Box-Cox Transformation for Image Normality and Pattern Classification
- URL: http://arxiv.org/abs/2004.07210v3
- Date: Thu, 1 Oct 2020 13:13:17 GMT
- Title: On Box-Cox Transformation for Image Normality and Pattern Classification
- Authors: Abbas Cheddad
- Abstract summary: This paper revolves around the utility of the Box-Cox transformation as a pre-processing step to transform two-dimensional data.
We compare the effect of this light-weight Box-Cox transformation with well-established state-of-the-art low light image enhancement techniques.
We also demonstrate the effectiveness of our approach through several test-bed data sets for generic improvement of visual appearance of images.
- Score: 0.6548580592686074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A unique member of the power transformation family is known as the Box-Cox
transformation. The latter can be seen as a mathematical operation that leads
to finding the optimum lambda (λ) value that maximizes the
log-likelihood function to transform data to a normal distribution and to
reduce heteroscedasticity. In data analytics, a normality assumption underlies
a variety of statistical test models. This technique, however, is best known in
statistical analysis to handle one-dimensional data. Herein, this paper
revolves around the utility of such a tool as a pre-processing step to
transform two-dimensional data, namely digital images, and to study its effect.
Moreover, to reduce time complexity, it suffices to estimate the parameter
lambda in real time for large two-dimensional matrices by merely considering
their probability density function as a statistical inference of the underlying
data distribution. We compare the effect of this light-weight Box-Cox
transformation with well-established state-of-the-art low light image
enhancement techniques. We also demonstrate the effectiveness of our approach
through several test-bed data sets for generic improvement of visual appearance
of images and for ameliorating the performance of a colour pattern
classification algorithm as an example application. Results with and without
the proposed approach are compared using the AlexNet (transfer deep learning)
pretrained model. To the best of our knowledge, this is the first time that the
Box-Cox transformation is extended to digital images by exploiting histogram
transformation.
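As a concrete illustration of the histogram-based estimation described above, the following is a minimal sketch (not the paper's reference implementation), assuming a NumPy/SciPy environment and a grayscale uint8 image: the Box-Cox log-likelihood is maximized over lambda using the image histogram as an empirical PDF, and the transformed image is rescaled back to 8-bit. The function names, bin count, and lambda search interval are illustrative assumptions.

```python
# Minimal sketch, assuming NumPy/SciPy; names and defaults are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import boxcox


def estimate_lambda_from_histogram(img, bins=256):
    """Estimate the Box-Cox lambda by maximizing the log-likelihood over the
    image histogram (an empirical PDF), so the cost depends on the number of
    bins rather than on the number of pixels."""
    x = img.astype(np.float64).ravel()
    x = x - x.min() + 1.0                      # Box-Cox requires strictly positive data
    counts, edges = np.histogram(x, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    w, c = counts[keep].astype(np.float64), centres[keep]
    n, log_c = w.sum(), np.log(c)

    def neg_loglik(lam):
        # Box-Cox profile log-likelihood, with histogram counts as weights.
        y = log_c if abs(lam) < 1e-8 else (c ** lam - 1.0) / lam
        mu = np.dot(w, y) / n
        var = np.dot(w, (y - mu) ** 2) / n
        return 0.5 * n * np.log(var + 1e-12) - (lam - 1.0) * np.dot(w, log_c)

    return minimize_scalar(neg_loglik, bounds=(-5.0, 5.0), method="bounded").x


def boxcox_image(img, bins=256):
    """Transform a 2-D image with the histogram-estimated lambda and rescale
    the result to [0, 255] for display or downstream classification."""
    lam = estimate_lambda_from_histogram(img, bins)
    x = img.astype(np.float64).ravel()
    x = x - x.min() + 1.0
    y = boxcox(x, lmbda=lam).reshape(img.shape)
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)
    return (255.0 * y).astype(np.uint8), lam
```

For example, `enhanced, lam = boxcox_image(gray)` for a uint8 grayscale array `gray`. A colour image could be handled channel-wise or on a luminance channel before classification; the abstract does not specify the colour handling, so this sketch stays with the single-channel case.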
Related papers
- Variable-size Symmetry-based Graph Fourier Transforms for image compression [65.7352685872625]
We propose a new family of Symmetry-based Graph Fourier Transforms (SBGFTs) of variable sizes and integrate them into a coding framework.
Our proposed algorithm generates symmetric graphs on the grid by adding specific symmetrical connections between nodes.
Experiments show that SBGFTs outperform the primary transforms integrated in the explicit Multiple Transform Selection.
arXiv Detail & Related papers (2024-11-24T13:00:44Z)
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms, such as low-rank computation, have impressive performance for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on the testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z)
- Linear Anchored Gaussian Mixture Model for Location and Width Computations of Objects in Thick Line Shape [1.7205106391379021]
The 3D gray-level representation of an image is considered as a finite mixture model of a statistical distribution.
An Expectation-Maximization algorithm (Algo1) that uses the original image as input data is used to estimate the model parameters.
A modified EM algorithm (Algo2) is also detailed.
arXiv Detail & Related papers (2024-04-03T20:05:00Z)
- NaturalInversion: Data-Free Image Synthesis Improving Real-World Consistency [1.1470070927586016]
We introduce NaturalInversion, a novel model inversion-based method to synthesize images that agree well with the original data distribution without using real data.
We show, through visualization and additional analysis, that our images are more consistent with the original data distribution than those of prior works.
arXiv Detail & Related papers (2023-06-29T03:43:29Z)
- T-ADAF: Adaptive Data Augmentation Framework for Image Classification Network based on Tensor T-product Operator [0.0]
This paper proposes an Adaptive Data Augmentation Framework based on the tensor T-product Operator.
It triples each training image and combines the results from all three images, with less than a 0.1% increase in the number of parameters.
Numerical experiments show that our data augmentation framework can improve the performance of the original neural network model by 2%.
arXiv Detail & Related papers (2023-06-07T08:30:44Z)
- Optimizing transformations for contrastive learning in a differentiable framework [4.828899860513713]
We propose a framework to find optimal transformations for contrastive learning using a differentiable transformation network.
Our method improves performance in the low annotated data regime, both in supervised accuracy and in convergence speed.
Experiments were performed on 34000 2D slices of brain Magnetic Resonance Images and 11200 chest X-ray images.
arXiv Detail & Related papers (2022-07-27T08:47:57Z)
- GradViT: Gradient Inversion of Vision Transformers [83.54779732309653]
We demonstrate the vulnerability of vision transformers (ViTs) to gradient-based inversion attacks.
We introduce a method, named GradViT, that optimizes random noise into natural-looking images.
We observe unprecedentedly high fidelity and closeness to the original (hidden) data.
arXiv Detail & Related papers (2022-03-22T17:06:07Z)
- Hybrid Model-based / Data-driven Graph Transform for Image Coding [54.31406300524195]
We present a hybrid model-based / data-driven approach to encode an intra-prediction residual block.
The first $K$ eigenvectors of a transform matrix are derived from a statistical model, e.g., the asymmetric discrete sine transform (ADST), for stability.
Using WebP as a baseline image codec, experimental results show that our hybrid graph transform achieved better energy compaction than the default discrete cosine transform (DCT) and better stability than the KLT.
arXiv Detail & Related papers (2022-03-02T15:36:44Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning [64.32306537419498]
We propose a novel learned feature-based refinement and augmentation method that produces a varied set of complex transformations.
These transformations also use information from both within-class and across-class representations that we extract through clustering.
We demonstrate that our method is comparable to the current state of the art on smaller datasets while being able to scale up to larger datasets.
arXiv Detail & Related papers (2020-07-16T17:55:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.