Estimation of the Mean Function of Functional Data via Deep Neural
Networks
- URL: http://arxiv.org/abs/2012.04573v1
- Date: Tue, 8 Dec 2020 17:18:16 GMT
- Title: Estimation of the Mean Function of Functional Data via Deep Neural
Networks
- Authors: Shuoyang Wang, Guanqun Cao, Zuofeng Shang
- Abstract summary: We propose a deep neural network method to perform nonparametric regression for functional data.
The proposed method is applied to analyze positron emission tomography images of patients with Alzheimer's disease.
- Score: 6.230751621285321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a deep neural network method to perform
nonparametric regression for functional data. The proposed estimators are based
on sparsely connected deep neural networks with ReLU activation function. By
properly choosing network architecture, our estimator achieves the optimal
nonparametric convergence rate in empirical norm. Under certain circumstances,
such as a trigonometric polynomial kernel and a sufficiently large sampling
frequency, the convergence rate is even faster than the root-$n$ rate. Through
Monte Carlo simulation studies we examine the finite-sample performance of the
proposed method. Finally, the proposed method is applied to analyze positron
emission tomography images of patients with Alzheimer's disease obtained from
the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
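As a rough illustration of the approach (a minimal sketch, not the authors' implementation: a fully connected ReLU network stands in for the paper's sparsely connected architecture, and the data-generating model, network width, and training settings are all assumptions):

```python
# Minimal sketch: estimate the mean function g(t) of functional data
# Y_ij = g(t_j) + noise with a ReLU network. A fully connected network
# stands in for the paper's sparsely connected architecture; the
# data-generating model and hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, m = 50, 100                                # n curves observed at m points
t = torch.linspace(0.0, 1.0, m).unsqueeze(1)  # common sampling grid
g = torch.sin(2 * torch.pi * t).T             # hypothetical true mean function
Y = g + 0.3 * torch.randn(n, m)               # n noisy functional observations

net = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):                          # empirical least-squares fit
    opt.zero_grad()
    loss = ((net(t).T - Y) ** 2).mean()       # broadcast over all curves
    loss.backward()
    opt.step()

g_hat = net(t).detach().squeeze(1)            # estimated mean function on the grid
```

Minimizing the empirical squared error over all curves, as above, is equivalent to fitting the pointwise sample mean of the curves on the grid.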
Related papers
- PINQI: An End-to-End Physics-Informed Approach to Learned Quantitative MRI Reconstruction [0.7199733380797579]
Quantitative Magnetic Resonance Imaging (qMRI) enables the reproducible measurement of biophysical parameters in tissue.
The challenge lies in solving a nonlinear, ill-posed inverse problem to obtain desired tissue parameter maps from acquired raw data.
We propose PINQI, a novel qMRI reconstruction method that integrates the knowledge about the signal, acquisition model, and learned regularization into a single end-to-end trainable neural network.
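To make the physics-informed ingredient concrete, here is a toy sketch (not PINQI itself): a tissue parameter is recovered by gradient descent through a known, differentiable signal model; the inversion-recovery model and all values are assumptions.

```python
# Toy physics-informed fit (illustrative, not the PINQI architecture):
# recover T1 from inversion-recovery measurements by gradient descent
# through a known signal model S(TI) = 1 - 2 * exp(-TI / T1).
import torch

torch.manual_seed(0)
TI = torch.tensor([0.1, 0.3, 0.6, 1.0, 2.0, 4.0])   # inversion times (s), assumed
true_T1 = 1.2
signal = 1 - 2 * torch.exp(-TI / true_T1)
signal = signal + 0.02 * torch.randn_like(signal)    # noisy measurements

T1 = torch.tensor(0.5, requires_grad=True)           # initial guess
opt = torch.optim.Adam([T1], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    model = 1 - 2 * torch.exp(-TI / T1)              # physics (signal) model
    loss = ((model - signal) ** 2).mean()            # data-consistency term
    loss.backward()
    opt.step()
print(float(T1))  # close to 1.2
```

PINQI additionally unrolls this kind of model-based optimization inside an end-to-end trainable network with a learned regularizer.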
arXiv Detail & Related papers (2023-06-19T15:37:53Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
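For context, a threshold-activation forward pass looks like the sketch below (illustrative NumPy, not the paper's convex formulation); the piecewise-constant activation has zero gradient almost everywhere, which is part of what motivates a convex reformulation.

```python
# Forward pass of a one-hidden-layer network with threshold activations
# (illustrative). The activation is piecewise constant, so plain
# backpropagation yields zero gradients almost everywhere; convex
# reformulations sidestep this difficulty.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
w2 = rng.normal(size=16)

def forward(x):
    h = (W1 @ x + b1 > 0).astype(float)   # threshold (Heaviside) activation
    return w2 @ h

print(forward(rng.normal(size=4)))
```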
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Spherical convolutional neural networks can improve brain microstructure estimation from diffusion MRI data [0.35998666903987897]
Diffusion magnetic resonance imaging is sensitive to the microstructural properties of brain tissue.
Estimating clinically and scientifically relevant microstructural properties from the measured signals remains a highly challenging inverse problem that machine learning may help solve.
We trained a spherical convolutional neural network to predict the ground-truth parameter values from efficiently simulated noisy data.
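The training recipe, supervised regression on simulated noisy signals with known ground-truth parameters, can be sketched generically as below (an ordinary MLP stands in for the spherical CNN, and the exponential signal model is a placeholder).

```python
# Generic simulation-based training sketch (an MLP stands in for the
# spherical CNN; the exponential "signal model" is a placeholder).
import torch
import torch.nn as nn

torch.manual_seed(0)
b = torch.linspace(0.0, 3.0, 32)                 # acquisition settings, assumed

def simulate(theta):                             # placeholder forward model
    return torch.exp(-theta * b) + 0.02 * torch.randn(theta.shape[0], 32)

theta = torch.rand(4096, 1) * 2                  # ground-truth parameters
X = simulate(theta)                              # simulated noisy signals

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((net(X) - theta) ** 2).mean()        # regress parameters from signals
    loss.backward()
    opt.step()
```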
arXiv Detail & Related papers (2022-11-17T20:52:00Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
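A stripped-down sketch of the checkpoint-dataset idea (a diagonal Gaussian stands in for the paper's learned generative model; all sizes are illustrative):

```python
# Illustrative checkpoint-dataset sketch: train several small models,
# flatten their parameter vectors, and fit a simple generative model
# (a diagonal Gaussian stands in for the paper's learned model).
import torch
import torch.nn as nn

def train_small_net(seed):
    torch.manual_seed(seed)
    net = nn.Linear(8, 1)
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    X, y = torch.randn(256, 8), torch.randn(256, 1)
    for _ in range(50):
        opt.zero_grad()
        ((net(X) - y) ** 2).mean().backward()
        opt.step()
    return torch.cat([p.detach().flatten() for p in net.parameters()])

checkpoints = torch.stack([train_small_net(s) for s in range(20)])
mu, sigma = checkpoints.mean(0), checkpoints.std(0)
sampled_params = mu + sigma * torch.randn_like(mu)   # "generate" new parameters
```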
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Robust Deep Neural Network Estimation for Multi-dimensional Functional Data [0.22843885788439797]
We propose a robust estimator for the location function from multi-dimensional functional data.
The proposed estimators are based on deep neural networks with the ReLU activation function.
The proposed method is also applied to analyze 2D and 3D images of patients with Alzheimer's disease.
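A hedged sketch of the robust-fitting idea: the squared error is replaced by a robust loss when training the ReLU network (Huber here, as one common choice; the paper's loss function may differ).

```python
# Robust location-function fit (illustrative): same ReLU-network setup as a
# least-squares fit, but with a Huber loss to damp the influence of outliers.
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.rand(2000, 2)                          # 2D "pixel" locations
y = torch.sin(t.sum(dim=1, keepdim=True) * 3)    # hypothetical location function
y = y + 0.1 * torch.randn_like(y)
y[::50] += 5.0                                   # heavy-tailed contamination

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
huber = nn.HuberLoss(delta=1.0)                  # one robust choice, assumed
for _ in range(300):
    opt.zero_grad()
    loss = huber(net(t), y)
    loss.backward()
    opt.step()
```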
arXiv Detail & Related papers (2022-05-19T14:53:33Z)
- Neuron-based Pruning of Deep Neural Networks with Better Generalization using Kronecker Factored Curvature Approximation [18.224344440110862]
The proposed algorithm directs the parameters of the compressed model toward a flatter solution by exploring the spectral radius of the Hessian.
Our results show that it improves the state-of-the-art results on neuron compression.
The method is able to achieve very small networks with only a small loss in accuracy across different neural network models.
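As a simplified stand-in for the curvature-aware criterion (the paper scores neurons via a Kronecker-factored Hessian approximation), the sketch below prunes hidden neurons by a plain weight-magnitude score.

```python
# Simplified neuron-pruning sketch: rank hidden units by incoming and
# outgoing weight norms and keep the strongest. The paper instead uses a
# Kronecker-factored curvature approximation; this magnitude criterion is
# only an illustrative stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)
fc1, fc2 = nn.Linear(10, 32), nn.Linear(32, 1)
score = fc1.weight.norm(dim=1) * fc2.weight.norm(dim=0)  # per-neuron importance
keep = score.argsort(descending=True)[:16]               # keep top 16 neurons

pruned1, pruned2 = nn.Linear(10, 16), nn.Linear(16, 1)
with torch.no_grad():
    pruned1.weight.copy_(fc1.weight[keep])
    pruned1.bias.copy_(fc1.bias[keep])
    pruned2.weight.copy_(fc2.weight[:, keep])
    pruned2.bias.copy_(fc2.bias)
```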
arXiv Detail & Related papers (2021-11-16T15:55:59Z)
- Measurement error models: from nonparametric methods to deep neural networks [3.1798318618973362]
We propose an efficient neural network design for estimating measurement error models.
We use a fully connected feed-forward neural network to approximate the regression function $f(x)$.
We conduct an extensive numerical study to compare the neural network approach with classical nonparametric methods.
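The setup can be sketched as follows (illustrative: Y = f(X) + e is observed only through a contaminated covariate W = X + u; the naive fit shown here ignores the measurement error, which is what the paper's estimator must correct for).

```python
# Measurement-error setup sketch (illustrative): Y = f(X) + e, but only
# W = X + u is observed. A fully connected network approximates f; the
# naive fit on (W, Y) below is biased, which motivates error-corrected
# estimators such as the paper's.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(2000, 1)
Y = torch.sin(2 * X) + 0.1 * torch.randn_like(X)   # hypothetical f(x) = sin(2x)
W = X + 0.3 * torch.randn_like(X)                  # observed, with measurement error

net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = ((net(W) - Y) ** 2).mean()              # naive fit, for illustration only
    loss.backward()
    opt.step()
```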
arXiv Detail & Related papers (2020-07-15T06:05:37Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
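A minimal sketch of such a min-max game for a conditional moment restriction E[Y - f(X) | Z] = 0, with both players parameterized by small networks (the quadratic regularizer on the critic and all training details are assumptions):

```python
# Min-max sketch: min_f max_g E[(Y - f(X)) g(Z)] - 0.5 E[g(Z)^2], with both
# players as neural networks trained by alternating gradient steps.
import torch
import torch.nn as nn

torch.manual_seed(0)
Z = torch.randn(1000, 1)                     # instrument
X = Z + 0.5 * torch.randn_like(Z)            # endogenous regressor
Y = 2 * X + 0.3 * torch.randn_like(X)        # assumed structural equation

f = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
g = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

for _ in range(500):
    # ascent step for the critic g
    obj = ((Y - f(X).detach()) * g(Z)).mean() - 0.5 * (g(Z) ** 2).mean()
    opt_g.zero_grad()
    (-obj).backward()
    opt_g.step()
    # descent step for the estimator f
    opt_f.zero_grad()
    ((Y - f(X)) * g(Z).detach()).mean().backward()
    opt_f.step()
```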
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data where the predictive model is a deep neural network.
Our method requires far fewer communication rounds while still achieving a linear speedup in theory.
Our experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
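At its core, AUC maximization optimizes a pairwise ranking objective over positive/negative pairs; a single-machine sketch with a squared-hinge surrogate is below (the distributed, communication-efficient machinery is omitted).

```python
# Single-machine AUC-surrogate sketch: squared-hinge pairwise ranking loss
# over positive/negative score pairs. The paper's distributed optimization
# with few communication rounds is not shown.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X[:, 0] + 0.5 * torch.randn(512) > 0)         # synthetic labels
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):
    s = net(X).squeeze(1)
    pos, neg = s[y], s[~y]
    margin = 1.0 - (pos.unsqueeze(1) - neg.unsqueeze(0))  # all pos/neg pairs
    loss = torch.clamp(margin, min=0).pow(2).mean()       # squared hinge
    opt.zero_grad()
    loss.backward()
    opt.step()
```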
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
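The rectified linear postsynaptic potential in the title can be written down directly; a sketch of the kernel and the resulting membrane potential is below (the kernel form max(0, (t - t_s)/tau) follows the name, and all constants, spike times, and weights are illustrative).

```python
# Sketch of a rectified linear postsynaptic potential (ReL-PSP):
# K(t - t_s) = max(0, (t - t_s) / tau), so the membrane potential is a
# weighted sum of ramps starting at each input spike time.
import numpy as np

def rel_psp(t, t_spike, tau=1.0):
    return np.maximum(0.0, (t - t_spike) / tau)

def membrane_potential(t, spike_times, weights, tau=1.0):
    return sum(w * rel_psp(t, ts, tau) for w, ts in zip(weights, spike_times))

t = np.linspace(0, 5, 100)
V = membrane_potential(t, spike_times=[0.5, 1.0, 2.0], weights=[0.8, -0.3, 0.5])
# The neuron would emit a spike when V first crosses a firing threshold.
```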
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of stochastic gradient descent combined with the nonconvexity of the underlying optimization problems renders parameter learning susceptible to initialization.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
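The algebra behind fusing two neighboring linear layers is shown below; this exact collapse holds only when no nonlinearity sits between the layers, and the paper's MSE-optimal initialization goes beyond this sketch.

```python
# Fusing two neighboring linear layers (exact only with no nonlinearity
# between them): y = W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2).
import torch
import torch.nn as nn

torch.manual_seed(0)
l1, l2 = nn.Linear(8, 16), nn.Linear(16, 4)
fused = nn.Linear(8, 4)
with torch.no_grad():
    fused.weight.copy_(l2.weight @ l1.weight)
    fused.bias.copy_(l2.weight @ l1.bias + l2.bias)

x = torch.randn(3, 8)
assert torch.allclose(l2(l1(x)), fused(x), atol=1e-5)
```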
arXiv Detail & Related papers (2020-01-28T18:25:15Z)