Feature Qualification by Deep Nets: A Constructive Approach
- URL: http://arxiv.org/abs/2503.18676v1
- Date: Mon, 24 Mar 2025 13:48:17 GMT
- Title: Feature Qualification by Deep Nets: A Constructive Approach
- Authors: Feilong Cao, Shao-Bo Lin
- Abstract summary: We build a linear deep net operator that possesses optimal approximation performance in approximating smooth and radial functions. We provide theoretical evidence that the constructed deep net operator is capable of qualifying multiple features such as the smoothness and radialness of the target functions.
- Score: 19.474935486234166
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The great success of deep learning has stimulated avid research into verifying the power of depth in theory, with a common consensus that deep nets are versatile in approximating and learning numerous functions. Such versatility certainly enhances our understanding of the power of depth, but it makes it difficult to judge which data features are crucial in a specific learning task. This paper proposes a constructive approach that equips deep nets for feature qualification. Using the product-gate nature and localized approximation property of deep nets with sigmoid activation (deep sigmoid nets), we construct a linear deep net operator with optimal approximation performance for smooth and radial functions. Furthermore, we provide theoretical evidence that the constructed deep net operator is capable of qualifying multiple features, such as the smoothness and radialness of the target functions.
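The two ingredients named in the abstract, the product-gate property and localized approximation of deep sigmoid nets, admit classical constructions that a few lines of code can illustrate. The sketch below is a minimal numerical check, not the paper's operator; the centering point t0, step h, and gain K are arbitrary illustrative choices.
```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Product gate: a second divided difference of the sigmoid around a point
# t0 with sigma''(t0) != 0 recovers x^2 up to O(h^2), since
#   sigma(t0 + h*x) + sigma(t0 - h*x) - 2*sigma(t0) = sigma''(t0)*h^2*x^2 + O(h^4).
T0 = 1.0  # sigma''(0) = 0 for the logistic sigmoid, so expand away from the origin
SIGPP = sigmoid(T0) * (1.0 - sigmoid(T0)) * (1.0 - 2.0 * sigmoid(T0))  # sigma''(t0)

def square_gate(x, h=1e-2):
    return (sigmoid(T0 + h * x) + sigmoid(T0 - h * x) - 2.0 * sigmoid(T0)) / (h**2 * SIGPP)

def product_gate(x, y, h=1e-2):
    # Polarization identity: x*y = ((x + y)^2 - x^2 - y^2) / 2
    return 0.5 * (square_gate(x + y, h) - square_gate(x, h) - square_gate(y, h))

# Localized approximation: a difference of two shifted, steep sigmoids
# approximates the indicator of [a, b] as the gain K grows.
def bump(x, a, b, K=200.0):
    return sigmoid(K * (x - a)) - sigmoid(K * (x - b))

print(product_gate(0.3, -0.7))                      # ~ -0.21 = 0.3 * (-0.7)
print(bump(np.array([0.1, 0.5, 0.9]), 0.25, 0.75))  # ~ [0, 1, 0]
```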
Related papers
- The Computational Advantage of Depth: Learning High-Dimensional Hierarchical Functions with Gradient Descent [28.999394988111106]
We introduce a class of target functions that incorporate a hierarchy of latent subspace dimensionalities. Our main theorem shows that feature learning with gradient descent reduces the effective dimensionality. These findings open the way to further quantitative studies of the crucial role of depth in learning hierarchical structures with deep networks.
arXiv Detail & Related papers (2025-02-19T18:58:28Z)
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We explore intersections between sparse coding and deep learning to enhance our understanding of feature extraction capabilities. We derive convergence rates for convolutional neural networks (CNNs) in their ability to extract sparse features. Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn sparser features.
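One generic way to encourage sparse features is an L1 penalty on hidden activations. The sketch below illustrates that idea only; it is not the training strategy studied in the paper, and the layer sizes, data, and hyperparameters are assumptions.
```python
import torch
import torch.nn as nn

# Hypothetical one-hidden-layer autoencoder; the L1 term on the hidden
# activations is a generic sparsity-inducing penalty, not the paper's method.
torch.manual_seed(0)
X = torch.randn(256, 32)                      # toy unlabeled data (assumption)

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
decoder = nn.Linear(64, 32)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
lam = 1e-3                                    # sparsity weight (assumption)

for step in range(500):
    z = encoder(X)                            # hidden features
    loss = ((decoder(z) - X) ** 2).mean() + lam * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    print((encoder(X) == 0).float().mean())   # fraction of exactly-zero features
```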
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
- From Alexnet to Transformers: Measuring the Non-linearity of Deep Neural Networks with Affine Optimal Transport [32.39176908225668]
We introduce the concept of the non-linearity signature of a DNN, the first theoretically sound approach to measuring the non-linearity of deep neural networks.
We provide extensive experimental results that highlight the practical usefulness of the proposed non-linearity signature.
arXiv Detail & Related papers (2023-10-17T17:50:22Z)
- Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z)
- Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks [33.98624423578388]
Auxiliary tasks improve representations learned by deep reinforcement learning agents.
We derive a new family of auxiliary tasks based on the successor measure.
We show that proto-value networks produce rich features that may be used to obtain performance comparable to established algorithms.
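The successor measure generalizes the tabular successor representation, which has a closed form on a small Markov chain and gives a feel for the objects behind such auxiliary tasks. The chain and discount below are illustrative assumptions, not the paper's construction.
```python
import numpy as np

# Tabular successor representation on a toy Markov chain:
#   SR = sum_t gamma^t P^t = (I - gamma * P)^(-1).
rng = np.random.default_rng(0)
n_states, gamma = 5, 0.9
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)             # make rows stochastic

SR = np.linalg.inv(np.eye(n_states) - gamma * P)
print(SR[0])  # expected discounted visit counts starting from state 0
```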
arXiv Detail & Related papers (2023-04-25T04:25:08Z)
- Interplay between depth of neural networks and locality of target functions [5.33024001730262]
We report a remarkable interplay between depth and locality of a target function.
We find that depth is beneficial for learning local functions but detrimental to learning global functions.
arXiv Detail & Related papers (2022-01-28T12:41:24Z)
- Localized Persistent Homologies for more Effective Deep Learning [60.78456721890412]
We introduce an approach that relies on a new filtration function to account for location during network training.
We demonstrate experimentally on 2D images of roads and 3D image stacks of neuronal processes that networks trained in this manner are better at recovering the topology of the curvilinear structures they extract.
arXiv Detail & Related papers (2021-10-12T19:28:39Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- On Approximation in Deep Convolutional Networks: a Kernel Perspective [12.284934135116515]
We study the success of deep convolutional networks on tasks involving high-dimensional data such as images or audio.
We study this theoretically and empirically through the lens of kernel methods, by considering multi-layer convolutional kernels.
We find that while expressive kernels operating on input patches are important at the first layer, simpler kernels can suffice in higher layers for good performance.
arXiv Detail & Related papers (2021-02-19T17:03:42Z)
- The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z)
- Recursive Multi-model Complementary Deep Fusion for Robust Salient Object Detection via Parallel Sub Networks [62.26677215668959]
Fully convolutional networks have shown outstanding performance in the salient object detection (SOD) field.
This paper proposes a "wider" network architecture that consists of parallel sub-networks with totally different architectures.
Experiments on several well-known benchmarks clearly demonstrate the superior performance, good generalization, and strong learning ability of the proposed wider framework.
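The general idea of parallel sub-networks with heterogeneous designs can be sketched in a few lines; the branches and fusion layer below are illustrative assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

# Two parallel branches with deliberately different designs, fused by a
# 1x1 convolution into a single-channel saliency map.
class TwoBranchSOD(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = nn.Sequential(                 # deeper, small kernels
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
        self.branch_b = nn.Sequential(                 # shallower, large kernel
            nn.Conv2d(3, 8, 5, padding=2), nn.ReLU())
        self.fuse = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        feats = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return torch.sigmoid(self.fuse(feats))

print(TwoBranchSOD()(torch.randn(1, 3, 32, 32)).shape)  # (1, 1, 32, 32)
```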
arXiv Detail & Related papers (2020-08-07T10:39:11Z)
- Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability on some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z)
- Meta-Gradient Reinforcement Learning with an Objective Discovered Online [54.15180335046361]
We propose an algorithm based on meta-gradient descent that discovers its own objective, flexibly parameterised by a deep neural network.
Because the objective is discovered online, it can adapt to changes over time.
On the Atari Learning Environment, the meta-gradient algorithm adapts over time to learn with greater efficiency.
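Meta-gradient descent can be illustrated on a toy problem: take an inner gradient step on a loss shaped by a meta-parameter, then update the meta-parameter by differentiating a fixed evaluation loss through that step. The quadratic losses and step sizes below are assumptions, not the paper's setup.
```python
import torch

# Inner step: theta moves toward a meta-parameter eta that shapes its loss.
# Outer step: eta is updated by differentiating a fixed evaluation loss
# through the inner update (the meta-gradient). Toy quadratics throughout.
theta = torch.tensor(0.0, requires_grad=True)
eta = torch.tensor(1.0, requires_grad=True)
alpha, beta, target = 0.1, 0.05, 3.0           # step sizes and goal (assumptions)

for _ in range(500):
    inner_loss = (theta - eta) ** 2            # objective shaped by eta
    g, = torch.autograd.grad(inner_loss, theta, create_graph=True)
    theta_new = theta - alpha * g              # differentiable inner update
    outer_loss = (theta_new - target) ** 2     # fixed evaluation objective
    eta_grad, = torch.autograd.grad(outer_loss, eta)
    with torch.no_grad():
        eta -= beta * eta_grad                 # meta-gradient step on eta
        theta.copy_(theta_new)

print(float(theta), float(eta))                # both approach the target 3.0
```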
arXiv Detail & Related papers (2020-07-16T16:17:09Z)
- Depth Selection for Deep ReLU Nets in Feature Extraction and Generalization [22.696129751033983]
We show that implementing classical empirical risk minimization on deep nets can achieve optimal generalization performance for numerous learning tasks.
Our results are verified by a series of numerical experiments including toy simulations and a real application of earthquake seismic intensity prediction.
arXiv Detail & Related papers (2020-04-01T06:03:01Z)