Distribution Learning Based on Evolutionary Algorithm Assisted Deep
Neural Networks for Imbalanced Image Classification
- URL: http://arxiv.org/abs/2207.12744v1
- Date: Tue, 26 Jul 2022 08:51:47 GMT
- Authors: Yudi Zhao, Kuangrong Hao, Chaochen Gu, Bing Wei
- Abstract summary: We propose an iMproved Estimation Distribution Algorithm based Latent featUre Distribution Evolution (MEDA_LUDE) algorithm.
Experiments on benchmark imbalanced datasets validate the effectiveness of the proposed algorithm.
The MEDA_LUDE algorithm is also applied in the industrial field and successfully alleviates the imbalance issue in fabric defect classification.
- Score: 4.037464966510278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To address the quality-diversity trade-off for images generated in
imbalanced classification tasks, we investigate over-sampling-based methods at
the feature level rather than the data level and focus on searching the latent
feature space for optimal distributions. On this basis, we propose an iMproved
Estimation Distribution Algorithm based Latent featUre Distribution Evolution
(MEDA_LUDE) algorithm, in which a joint learning procedure optimizes the latent
features through the deep neural networks and evolves them through the
evolutionary algorithm. We explore the effect of
the Large-margin Gaussian Mixture (L-GM) loss function on distribution learning
and design a specialized fitness function based on the similarities among
samples to increase diversity. Extensive experiments on benchmark imbalanced
datasets validate the effectiveness of the proposed algorithm, which generates
images with both quality and diversity. Furthermore, the MEDA_LUDE algorithm is
applied in an industrial setting, where it successfully alleviates the
class-imbalance issue in fabric defect classification.
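The core loop of an estimation-of-distribution algorithm over a latent feature space can be sketched as follows. This is a generic illustration only, not the authors' exact MEDA_LUDE procedure: `diversity_fitness` is a hypothetical stand-in for the paper's similarity-based fitness function, and the Gaussian model and hyperparameters are illustrative assumptions.

```python
import numpy as np

def diversity_fitness(population):
    """Hypothetical fitness: reward samples far from the rest of the
    population (higher mean pairwise distance -> more diversity),
    loosely in the spirit of a similarity-based fitness function."""
    dists = np.linalg.norm(population[:, None, :] - population[None, :, :], axis=-1)
    return dists.mean(axis=1)  # one score per sample

def eda_evolve_latent(init_mean, init_std, pop_size=64, elite_frac=0.25,
                      iters=20, rng=None):
    """Minimal estimation-of-distribution loop over a latent space:
    sample latents from a Gaussian, score them, keep an elite subset,
    and re-estimate the distribution from the elites."""
    rng = np.random.default_rng(rng)
    mean = np.asarray(init_mean, dtype=float).copy()
    std = np.full_like(mean, init_std)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(iters):
        pop = rng.normal(mean, std, size=(pop_size, mean.size))  # sample latent features
        scores = diversity_fitness(pop)
        elite = pop[np.argsort(scores)[-n_elite:]]                # keep most diverse samples
        mean = elite.mean(axis=0)                                 # re-estimate distribution
        std = elite.std(axis=0) + 1e-6                            # avoid premature collapse
    return mean, std
```

In the paper's setting the latents would come from a trained encoder and the fitness would also reflect class-conditional quality; here the loop only demonstrates the distribution-estimation mechanism itself.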
Related papers
- Quantized Hierarchical Federated Learning: A Robust Approach to
Statistical Heterogeneity [3.8798345704175534]
We present a novel hierarchical federated learning algorithm that incorporates quantization for communication efficiency.
We offer a comprehensive analytical framework to evaluate its optimality gap and convergence rate.
Our findings reveal that our algorithm consistently achieves high learning accuracy over a range of parameters.
arXiv Detail & Related papers (2024-03-03T15:40:24Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm [80.94861441583275]
We investigate the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze the impact of different topology factors on the generalization of D-SGDA.
We also balance stability with generalization to obtain the optimal population risk in the convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z)
- Affine-Transformation-Invariant Image Classification by Differentiable Arithmetic Distribution Module [8.125023712173686]
Convolutional Neural Networks (CNNs) have achieved promising results in image classification.
However, CNNs remain vulnerable to affine transformations, including rotation, translation, flipping, and shuffling.
In this work, we introduce a more robust substitute by incorporating distribution learning techniques.
arXiv Detail & Related papers (2023-09-01T22:31:32Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion: the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Regularization Penalty Optimization for Addressing Data Quality Variance in OoD Algorithms [45.02465532852302]
We theoretically reveal the relationship between training data quality and algorithm performance.
A novel algorithm is proposed to alleviate the influence of low-quality data at both the sample level and the domain level.
arXiv Detail & Related papers (2022-06-12T14:36:04Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.