Fast Mesh Data Augmentation via Chebyshev Polynomial of Spectral
filtering
- URL: http://arxiv.org/abs/2010.02811v1
- Date: Tue, 6 Oct 2020 15:18:26 GMT
- Title: Fast Mesh Data Augmentation via Chebyshev Polynomial of Spectral
filtering
- Authors: Shih-Gu Huang, Moo K. Chung, Anqi Qiu, and Alzheimer's Disease
Neuroimaging Initiative
- Abstract summary: Deep neural networks have been recognized as a powerful learning technique in computer vision and medical image analysis.
In practice, there is often insufficient training data available and augmentation is used to expand the dataset.
This study proposes two unbiased augmentation methods, Laplace-Beltrami eigenfunction Data Augmentation (LB-eigDA) and Chebyshev polynomial Data Augmentation (C-pDA), to generate new surface data whose mean matches that of the real data.
- Score: 5.594792814661452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have recently been recognized as a powerful
learning technique in computer vision and medical image analysis. Trained deep
neural networks need to generalize to new data that they have not seen before.
In practice, there is often insufficient training data available and
augmentation is used to expand the dataset. Even though graph convolutional
neural networks (graph-CNNs) have been widely used in deep learning, there is a
lack of augmentation methods for generating data on graphs or surfaces. This study
proposes two unbiased augmentation methods, Laplace-Beltrami eigenfunction Data
Augmentation (LB-eigDA) and Chebyshev polynomial Data Augmentation (C-pDA), to
generate new data on surfaces whose mean is the same as that of the real data.
LB-eigDA augments data by resampling the LB coefficients. In parallel
with LB-eigDA, we introduce a fast augmentation approach, C-pDA, that employs a
polynomial approximation of LB spectral filters on surfaces. We design LB
spectral bandpass filters by Chebyshev polynomial approximation and resample
signals filtered via these filters to generate new data on surfaces. We first
validate LB-eigDA and C-pDA via simulated data and demonstrate their use for
improving classification accuracy. We then employ the brain images of
Alzheimer's Disease Neuroimaging Initiative (ADNI) and extract cortical
thickness that is represented on the cortical surface to illustrate the use of
the two augmentation methods. First, we demonstrate that the augmented cortical
thickness has a pattern similar to that of real data. Second, we show that
C-pDA is much faster than LB-eigDA. Last, we show that C-pDA can improve the AD
classification accuracy of graph-CNN.
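To make the LB-eigDA idea concrete, here is a minimal NumPy/SciPy sketch under the assumption that the Laplace-Beltrami operator is approximated by an unnormalized graph Laplacian on the mesh; the names (graph_laplacian, lb_eig_da), the number of eigenfunctions k, and permutation as the resampling scheme are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of LB-eigDA, not the authors' code.
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh

def graph_laplacian(n_vertices, edges):
    """Unnormalized graph Laplacian L = D - A, standing in for the discretized
    Laplace-Beltrami operator; `edges` lists each undirected pair once."""
    rows, cols = zip(*edges)
    A = csr_matrix((np.ones(len(edges)), (rows, cols)),
                   shape=(n_vertices, n_vertices))
    A = A + A.T                                     # symmetrize
    return (diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsr()

def lb_eig_da(L, signals, k=100, rng=None):
    """signals: (n_subjects, n_vertices) array, e.g. cortical thickness.
    Expand each signal in the first k eigenvectors, permute every coefficient
    independently across subjects, and reconstruct. A permutation leaves each
    coefficient's sample mean unchanged, so the augmented data have the same
    mean as the real data (the 'unbiased' property stated in the abstract)."""
    rng = rng or np.random.default_rng()
    _, V = eigsh(L, k=k, which='SM')                # low-frequency eigenvectors
    coeffs = signals @ V                            # (n_subjects, k)
    coeffs_aug = np.column_stack([rng.permutation(coeffs[:, j])
                                  for j in range(k)])
    return coeffs_aug @ V.T                         # back to vertex space
```

The eigendecomposition inside eigsh is the expensive step here, which is what motivates the polynomial alternative sketched next.

C-pDA avoids that eigendecomposition entirely: each bandpass filter h(lambda) is expanded in Chebyshev polynomials and applied through the three-term recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x), using only sparse matrix-vector products. The sketch below continues the one above (same imports, plus scipy.sparse.identity); the band edges, polynomial order, and per-band permutation are again my assumptions rather than the paper's exact settings.

```python
# Hypothetical sketch of C-pDA, reusing graph_laplacian from the sketch above.
from scipy.sparse import identity

def cheb_coeffs(h, lmax, order):
    """Chebyshev expansion coefficients of the filter h on [0, lmax],
    computed by Gauss-Chebyshev quadrature; h is evaluated elementwise."""
    n = order + 1
    theta = np.pi * (np.arange(n) + 0.5) / n
    lam = 0.5 * lmax * (np.cos(theta) + 1.0)        # nodes mapped to [0, lmax]
    return np.array([2.0 / n * np.sum(h(lam) * np.cos(k * theta))
                     for k in range(n)])

def c_pda(L, signals, bands, order=50, rng=None):
    """Split signals into spectral bands with Chebyshev-approximated bandpass
    filters (no eigendecomposition), permute each band across subjects, and
    sum the bands back into augmented surface signals."""
    rng = rng or np.random.default_rng()
    lmax = float(eigsh(L, k=1, return_eigenvectors=False)[0])
    Ls = (2.0 / lmax) * L - identity(L.shape[0])    # rescale spectrum to [-1, 1]
    apply_L = lambda X: (Ls @ X.T).T                # one sparse matvec per subject
    out = np.zeros_like(signals, dtype=float)
    for lo, hi in bands:                            # e.g. bands tiling [0, lmax]
        c = cheb_coeffs(lambda lam: ((lam >= lo) & (lam <= hi)).astype(float),
                        lmax, order)
        T_prev, T_cur = signals, apply_L(signals)   # T_0 f and T_1 f
        band = 0.5 * c[0] * T_prev + c[1] * T_cur
        for k in range(2, order + 1):               # T_k = 2 Ls T_{k-1} - T_{k-2}
            T_prev, T_cur = T_cur, 2.0 * apply_L(T_cur) - T_prev
            band += c[k] * T_cur
        out += rng.permutation(band)                # resample subjects per band
    return out
```

Each Chebyshev term costs one sparse matrix-vector product per subject, so the whole augmentation scales with the polynomial order and the number of mesh edges rather than with an eigendecomposition, consistent with the speed advantage of C-pDA reported in the abstract.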
Related papers
- Data Augmentation Scheme for Raman Spectra with Highly Correlated
Annotations [0.23090185577016453]
We exploit the additive nature of spectra to generate, from a given dataset, additional data points that have statistically independent labels (a toy sketch appears after this list).
We show that training a CNN on these generated data points improves performance on datasets whose annotations do not bear the same correlation as the dataset used for model training.
arXiv Detail & Related papers (2024-02-01T18:46:28Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization that controls the variance of the low-dimensional representation during autoencoder training and achieves high diversity in the samples generated by the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
arXiv Detail & Related papers (2022-12-29T20:57:46Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z)
- Revisiting convolutional neural network on graphs with polynomial approximations of Laplace-Beltrami spectral filtering [6.111909222842263]
This paper revisits the spectral graph convolutional neural networks (graph-CNNs) given in Defferrard et al.
We develop the Laplace-Beltrami CNN (LBCNN) by replacing the graph Laplacian with the LB operator.
arXiv Detail & Related papers (2020-10-26T01:18:05Z)
- Transfer Learning and SpecAugment applied to SSVEP Based BCI Classification [1.9336815376402716]
We use deep convolutional neural networks (DCNNs) to classify EEG signals in a single-channel brain-computer interface (BCI).
EEG signals were converted to spectrograms and served as input to train DCNNs using the transfer learning technique.
arXiv Detail & Related papers (2020-10-08T00:30:12Z)
- A Systematic Approach to Featurization for Cancer Drug Sensitivity Predictions with Deep Learning [49.86828302591469]
We train >35,000 neural network models, sweeping over common featurization techniques.
We found RNA-seq features to be highly redundant and informative even with subsets larger than 128 features.
arXiv Detail & Related papers (2020-04-30T20:42:17Z)
- A Deep Convolutional Neural Network for COVID-19 Detection Using Chest X-Rays [2.2843885788439797]
We present image classifiers based on Dense Convolutional Networks and transfer learning to classify chest X-ray images according to three labels: COVID-19, pneumonia and normal.
We reached 100% accuracy on our test dataset.
arXiv Detail & Related papers (2020-04-30T13:20:42Z)
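For the first related paper in the list (Data Augmentation Scheme for Raman Spectra), a toy sketch of the additive-spectra idea might look as follows; the array shapes, random pairing, and additive label combination are my guesses at the scheme, not the paper's code.

```python
# Hypothetical sketch of additive-spectra augmentation with independent labels.
import numpy as np

def additive_spectra_augment(spectra, labels, n_new, rng=None):
    """spectra: (n, n_channels) single-component spectra; labels: (n, n_classes)
    per-component annotations. Summing two randomly paired spectra yields a
    synthetic mixture whose component labels are statistically independent of
    each other, breaking the label correlations of the original dataset."""
    rng = rng or np.random.default_rng()
    i = rng.integers(0, len(spectra), size=n_new)   # first component per sample
    j = rng.integers(0, len(spectra), size=n_new)   # independently drawn second
    return spectra[i] + spectra[j], labels[i] + labels[j]
```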