Two-Dimensional Deep ReLU CNN Approximation for Korobov Functions: A Constructive Approach
- URL: http://arxiv.org/abs/2503.07976v1
- Date: Tue, 11 Mar 2025 02:15:09 GMT
- Title: Two-Dimensional Deep ReLU CNN Approximation for Korobov Functions: A Constructive Approach
- Authors: Qin Fang, Lei Shi, Min Xu, Ding-Xuan Zhou
- Abstract summary: This paper investigates the approximation capabilities of two-dimensional (2D) deep convolutional neural networks (CNNs). We focus on 2D CNNs, comprising multi-channel convolutional layers with zero-padding and ReLU activations, followed by a fully connected layer. We propose a fully constructive approach for building 2D CNNs to approximate Korobov functions and provide a rigorous analysis of the complexity of the constructed networks.
- Score: 13.218398833013293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates approximation capabilities of two-dimensional (2D) deep convolutional neural networks (CNNs), with Korobov functions serving as a benchmark. We focus on 2D CNNs, comprising multi-channel convolutional layers with zero-padding and ReLU activations, followed by a fully connected layer. We propose a fully constructive approach for building 2D CNNs to approximate Korobov functions and provide rigorous analysis of the complexity of the constructed networks. Our results demonstrate that 2D CNNs achieve near-optimal approximation rates under the continuous weight selection model, significantly alleviating the curse of dimensionality. This work provides a solid theoretical foundation for 2D CNNs and illustrates their potential for broader applications in function approximation.
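The architecture class the abstract describes (multi-channel convolutional layers with zero-padding and ReLU activations, followed by a fully connected layer) can be illustrated with a minimal numpy sketch. All shapes, weights, and helper names below are illustrative assumptions for a toy forward pass, not the paper's actual construction.

```python
import numpy as np

def conv2d_zero_pad(x, w):
    """Multi-channel 2D convolution (cross-correlation) with 'same' zero-padding.
    x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(w[o] * xp[:, i:i + k, j:j + k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Toy network in the studied class: two multi-channel conv layers with
# zero-padding and ReLU, then a fully connected layer producing a scalar
# function value (the shape a function approximator would output).
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8, 8))            # one input channel on an 8x8 grid
w1 = rng.standard_normal((4, 1, 3, 3)) * 0.1  # layer 1: 1 -> 4 channels
w2 = rng.standard_normal((4, 4, 3, 3)) * 0.1  # layer 2: 4 -> 4 channels
h = relu(conv2d_zero_pad(x, w1))
h = relu(conv2d_zero_pad(h, w2))
w_fc = rng.standard_normal(h.size) * 0.01     # fully connected output layer
y = float(w_fc @ h.ravel())
print(h.shape, y)
```

Zero-padding keeps the spatial size fixed across layers, so depth and channel counts, the quantities whose growth the paper's complexity analysis tracks, can be varied independently of the input resolution.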
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z) - The R2D2 deep neural network series paradigm for fast precision imaging in radio astronomy [1.7249361224827533]
Recent image reconstruction techniques offer remarkable imaging precision, well beyond CLEAN's capability.
We introduce a novel deep learning approach, dubbed "Residual-to-Residual DNN series for high-Dynamic range imaging"
R2D2's capability to deliver high precision is demonstrated in simulation, across a variety of image observation settings using the Very Large Array (VLA)
arXiv Detail & Related papers (2024-03-08T16:57:54Z) - Dynamic 3D Point Cloud Sequences as 2D Videos [81.46246338686478]
3D point cloud sequences serve as one of the most common and practical representation modalities of real-world environments.
We propose a novel generic representation called Structured Point Cloud Videos (SPCVs)
SPCVs re-organize a point cloud sequence as a 2D video with spatial smoothness and temporal consistency, where the pixel values correspond to the 3D coordinates of points.
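The representation described above maps each frame's points onto a 2D pixel grid whose values are xyz coordinates. The snippet below is only a hypothetical illustration using a synthetic grid-ordered cloud; the actual SPCV construction learns the smooth, temporally consistent point-to-pixel mapping.

```python
import numpy as np

# A sequence of T point clouds, each with N = H * W points, re-organised
# into a 2D "video" of shape (T, H, W, 3): pixel (i, j) stores an xyz triple.
T, H, W = 5, 16, 16
N = H * W
rng = np.random.default_rng(42)
clouds = rng.standard_normal((T, N, 3))  # T frames of N xyz points
video = clouds.reshape(T, H, W, 3)       # naive grid ordering, for illustration

# Point index 2 * W + 3 in frame 0 becomes pixel (2, 3) of the video.
assert np.allclose(video[0, 2, 3], clouds[0, 2 * W + 3])
print(video.shape)
```

Once the sequence is in this form, standard 2D video networks can process it directly, which is the point of the representation.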
arXiv Detail & Related papers (2024-03-02T08:18:57Z) - Approximation analysis of CNNs from a feature extraction view [8.94250977764275]
We establish an analysis of linear feature extraction by deep multi-channel convolutional neural networks (CNNs)
We give an exact construction showing how linear feature extraction can be conducted efficiently with multi-channel CNNs.
Rates of function approximation by such deep networks implemented with multiple channels and followed by fully connected layers are investigated as well.
arXiv Detail & Related papers (2022-10-14T04:09:01Z) - Continuous approximation by convolutional neural networks with a
sigmoidal function [0.0]
We present a class of convolutional neural networks (CNNs) called non-overlapping CNNs.
We prove that such networks with sigmoidal activation functions are capable of approximating arbitrary continuous functions defined on compact input sets to any desired degree of accuracy.
arXiv Detail & Related papers (2022-09-27T12:31:36Z) - What Can Be Learnt With Wide Convolutional Neural Networks? [69.55323565255631]
We study infinitely-wide deep CNNs in the kernel regime.
We prove that deep CNNs adapt to the spatial scale of the target function.
We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN.
arXiv Detail & Related papers (2022-08-01T17:19:32Z) - Towards a General Purpose CNN for Long Range Dependencies in
$\mathrm{N}$D [49.57261544331683]
We propose a single CNN architecture equipped with continuous convolutional kernels for tasks on arbitrary resolution, dimensionality and length without structural changes.
We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential (1$\mathrm{D}$) and visual data (2$\mathrm{D}$)
Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
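The continuous convolutional kernels mentioned above can be sketched as follows: instead of storing a fixed-size weight tensor, a small network maps relative positions to kernel values, so the same parameters yield a kernel at any resolution. The parameterisation below is an illustrative assumption, not the CCNN paper's actual design.

```python
import numpy as np

# Kernel-generating MLP: relative position in [-1, 1] -> kernel value.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 1)) * 0.5
b1 = rng.standard_normal(16) * 0.1
W2 = rng.standard_normal((1, 16)) * 0.5

def kernel_at(positions):
    """Evaluate the kernel generator at an array of relative positions."""
    h = np.tanh(W1 @ positions[None, :] + b1[:, None])
    return (W2 @ h).ravel()

def cont_conv1d(x, kernel_size):
    """1D convolution whose kernel is sampled from the continuous generator."""
    pos = np.linspace(-1.0, 1.0, kernel_size)
    return np.convolve(x, kernel_at(pos), mode="same")

x = rng.standard_normal(32)
y_small = cont_conv1d(x, 5)    # the same parameters...
y_large = cont_conv1d(x, 11)   # ...sampled at a finer kernel resolution
print(y_small.shape, y_large.shape)
```

Because the kernel is a function rather than a fixed array, the same module applies to inputs of arbitrary resolution or length, which is what enables one architecture across 1D and 2D tasks.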
arXiv Detail & Related papers (2022-06-07T15:48:02Z) - Application of 2-D Convolutional Neural Networks for Damage Detection in
Steel Frame Structures [0.0]
We present an application of 2-D convolutional neural networks (2-D CNNs) designed to perform both feature extraction and classification stages.
The method uses a network of lightweight CNNs instead of deep ones and takes raw acceleration signals as input.
arXiv Detail & Related papers (2021-10-29T16:29:31Z) - MSDPN: Monocular Depth Prediction with Partial Laser Observation using
Multi-stage Neural Networks [1.1602089225841632]
We propose a deep-learning-based multi-stage network architecture called Multi-Stage Depth Prediction Network (MSDPN)
MSDPN is proposed to predict a dense depth map using a 2D LiDAR and a monocular camera.
As verified experimentally, our network yields promising performance against state-of-the-art methods.
arXiv Detail & Related papers (2020-08-04T08:27:40Z) - Implicit Convex Regularizers of CNN Architectures: Convex Optimization
of Two- and Three-Layer Networks in Polynomial Time [70.15611146583068]
We study training of Convolutional Neural Networks (CNNs) with ReLU activations.
We introduce exact convex optimization with polynomial complexity with respect to the number of data samples, the number of neurons, and the data dimension.
arXiv Detail & Related papers (2020-06-26T04:47:20Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional
Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.