Consistent feature selection for neural networks via Adaptive Group Lasso
- URL: http://arxiv.org/abs/2006.00334v3
- Date: Fri, 3 Dec 2021 02:08:59 GMT
- Title: Consistent feature selection for neural networks via Adaptive Group Lasso
- Authors: Vu Dinh, Lam Si Tung Ho
- Abstract summary: We propose and establish a theoretical guarantee for the use of the adaptive group lasso for selecting important features of neural networks.
Specifically, we show that our feature selection method is consistent for single-output feed-forward neural networks with one hidden layer and hyperbolic tangent activation function.
- Score: 3.42658286826597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One main obstacle for the wide use of deep learning in medical and
engineering sciences is its interpretability. While neural network models are
strong tools for making predictions, they often provide little information
about which features play significant roles in influencing the prediction
accuracy. To overcome this issue, many regularization procedures for learning
with neural networks have been proposed for dropping non-significant features.
Unfortunately, the lack of theoretical results casts doubt on the applicability
of such pipelines. In this work, we propose and establish a theoretical
guarantee for the use of the adaptive group lasso for selecting important
features of neural networks. Specifically, we show that our feature selection
method is consistent for single-output feed-forward neural networks with one
hidden layer and hyperbolic tangent activation function. We demonstrate its
applicability using both simulation and data analysis.
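As a rough illustration of the two-stage procedure described in the abstract, the following is a minimal sketch, assuming PyTorch and synthetic data, not the authors' implementation: the first-layer weights of a one-hidden-layer tanh network are grouped by input feature, a plain group lasso fit supplies data-driven penalty weights a_g = 1/||w_g||^gamma, and the adaptive refit keeps only the features whose group norm stays away from zero. The function names, the proximal-update training loop, and all hyperparameters below are illustrative assumptions.

```python
# Minimal sketch of adaptive group lasso feature selection for a
# single-output, one-hidden-layer tanh network (illustrative only).
import torch


def fit_penalized(X, y, lam, adaptive_weights=None, n_hidden=16,
                  n_epochs=3000, lr=1e-2, seed=0):
    """Fit tanh(X W1 + b1) W2 + b2 with a group penalty on W1, where each
    group collects the weights leaving one input feature; groups are shrunk
    with a proximal (soft-threshold) step after each gradient update."""
    torch.manual_seed(seed)
    p = X.shape[1]
    W1 = (0.1 * torch.randn(p, n_hidden)).requires_grad_()
    b1 = torch.zeros(n_hidden, requires_grad=True)
    W2 = (0.1 * torch.randn(n_hidden, 1)).requires_grad_()
    b2 = torch.zeros(1, requires_grad=True)
    a = torch.ones(p) if adaptive_weights is None else adaptive_weights
    opt = torch.optim.SGD([W1, b1, W2, b2], lr=lr)
    for _ in range(n_epochs):
        opt.zero_grad()
        pred = torch.tanh(X @ W1 + b1) @ W2 + b2
        loss = torch.mean((pred.squeeze(-1) - y) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            g_norm = W1.norm(dim=1)  # one norm per input-feature group
            shrink = torch.clamp(1.0 - lr * lam * a / (g_norm + 1e-12), min=0.0)
            W1.mul_(shrink.unsqueeze(1))  # proximal group shrinkage
    return W1.detach()


def select_features(X, y, lam=0.05, gamma=1.0, tol=1e-6):
    """Two-stage adaptive group lasso: plain group lasso first, then a refit
    with data-driven weights 1 / ||w_g||^gamma; features whose group norm
    survives the second fit are selected."""
    W1_init = fit_penalized(X, y, lam)                        # stage 1
    a = 1.0 / (W1_init.norm(dim=1) ** gamma + 1e-6)           # adaptive weights
    W1_final = fit_penalized(X, y, lam, adaptive_weights=a)   # stage 2
    return (W1_final.norm(dim=1) > tol).nonzero(as_tuple=True)[0]


if __name__ == "__main__":
    # Toy example: only the first two of ten features influence the response.
    torch.manual_seed(1)
    X = torch.randn(500, 10)
    y = torch.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * torch.randn(500)
    print("selected features:", select_features(X, y).tolist())
```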
Related papers
- Interpreting Neural Networks through Mahalanobis Distance [0.0]
This paper introduces a theoretical framework that connects neural network linear layers with the Mahalanobis distance.
Although this work is theoretical and does not include empirical data, the proposed distance-based interpretation has the potential to enhance model robustness, improve generalization, and provide more intuitive explanations of neural network decisions.
arXiv Detail & Related papers (2024-10-25T07:21:44Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Effective Subset Selection Through The Lens of Neural Network Pruning [31.43307762723943]
It is important to select the data to be annotated wisely, which is known as the subset selection problem.
We investigate the relationship between subset selection and neural network pruning, which is more widely studied.
We propose utilizing the norm criterion of neural network features to improve subset selection methods.
arXiv Detail & Related papers (2024-06-03T08:12:32Z)
- Continual Learning via Sequential Function-Space Variational Inference [65.96686740015902]
We propose an objective derived by formulating continual learning as sequential function-space variational inference.
Compared to objectives that directly regularize neural network predictions, the proposed objective allows for more flexible variational distributions.
We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods.
arXiv Detail & Related papers (2023-12-28T18:44:32Z)
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neural networks whose calibration is not taken into account will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework for neural networks with regularization and proves its consistency.
Two types of activation functions are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
arXiv Detail & Related papers (2022-06-22T23:33:39Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Stochastic Neural Networks with Infinite Width are Deterministic [7.07065078444922]
We study stochastic neural networks, a main type of neural network in use.
We prove that as the width of an optimized neural network tends to infinity, its predictive variance on the training set decreases to zero.
arXiv Detail & Related papers (2022-01-30T04:52:31Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Consistent Feature Selection for Analytic Deep Neural Networks [3.42658286826597]
We investigate the problem of feature selection for analytic deep networks.
We prove that for a wide class of networks, the Adaptive Group Lasso selection procedure, with Group Lasso as the base estimator, is selection-consistent.
The work provides further evidence that Group Lasso might be inefficient for feature selection with neural networks.
arXiv Detail & Related papers (2020-10-16T01:59:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.