Feature Network Methods in Machine Learning and Applications
- URL: http://arxiv.org/abs/2401.04874v1
- Date: Wed, 10 Jan 2024 01:57:12 GMT
- Title: Feature Network Methods in Machine Learning and Applications
- Authors: Xinying Mu, Mark Kon
- Abstract summary: A machine learning (ML) feature network is a graph that connects ML features in learning tasks based on their similarity.
We provide an example of a deep tree-structured feature network, where hierarchical connections are formed through feature clustering and feed-forward learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A machine learning (ML) feature network is a graph that connects ML features
in learning tasks based on their similarity. This network representation allows
us to view feature vectors as functions on the network. By leveraging function
operations from Fourier analysis and functional analysis, one can readily
generate novel features, making use of the graph structure imposed on
the feature vectors. Such network structures have previously been studied
implicitly in image processing and computational biology. We thus describe
feature networks as graph structures imposed on feature vectors, and provide
applications in machine learning. One application involves graph-based
generalizations of convolutional neural networks, involving structured deep
learning with hierarchical representations of features that have varying depth
or complexity. This also extends to learning algorithms that are able to
generate useful new multilevel features. Additionally, we discuss the use of
feature networks to engineer new features, which can enhance the expressiveness
of the model. We give a specific example of a deep tree-structured feature
network, where hierarchical connections are formed through feature clustering
and feed-forward learning. This results in low learning complexity and
computational efficiency. Unlike "standard" neural features, which are limited
to modulated (thresholded) linear combinations of adjacent ones, feature
networks offer more general feedforward dependencies among features. For
example, radial basis functions or graph structure-based dependencies between
features can be utilized.
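The following Python sketch illustrates the core construction described above: features become nodes of a similarity graph, each sample's feature vector is viewed as a function on that graph, and new features are generated by a graph filtering operation. This is a minimal sketch under assumed choices (absolute Pearson correlation as the similarity measure, k-NN sparsification, and a Laplacian-based low-pass filter), not the authors' implementation; the function names and parameters are illustrative.

```python
# Minimal sketch of a feature network: features are nodes, edges are weighted
# by the similarity of feature columns across samples, and each sample's
# feature vector is smoothed over the graph to produce new features.
import numpy as np

def feature_similarity_graph(X, k=5):
    """Build a k-NN feature network from a data matrix X (samples x features).

    Nodes are features; edge weights are absolute Pearson correlations
    between feature columns, sparsified to the k strongest neighbors.
    """
    d = X.shape[1]
    S = np.abs(np.corrcoef(X, rowvar=False))      # feature-feature similarity
    np.fill_diagonal(S, 0.0)
    W = np.zeros_like(S)
    for i in range(d):                            # keep the k strongest edges per node
        nbrs = np.argsort(S[i])[-k:]
        W[i, nbrs] = S[i, nbrs]
    return np.maximum(W, W.T)                     # symmetrize

def graph_smoothed_features(X, W, alpha=0.5):
    """Generate new features by low-pass filtering each sample's feature
    vector over the feature network: x_new = (I + alpha * L)^{-1} x,
    where L is the combinatorial graph Laplacian of W."""
    d = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                # combinatorial Laplacian
    filt = np.linalg.inv(np.eye(d) + alpha * L)   # low-pass graph filter
    return X @ filt.T                             # smooth every sample's feature vector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))                # toy data: 200 samples, 30 features
    W = feature_similarity_graph(X, k=5)
    X_new = graph_smoothed_features(X, W, alpha=0.5)
    print(X_new.shape)                            # (200, 30): one new feature per node
```

The (I + alpha*L)^{-1} filter is only one of the function operations the abstract alludes to; a heat-kernel filter, graph Fourier band-pass, or radial basis function weights between features could be substituted in the same framework.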
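The deep tree-structured feature network mentioned in the abstract can be sketched in a similar hedged way: features are repeatedly clustered by similarity, and each cluster is aggregated into a new higher-level feature that feeds forward to the next level. The clustering method (average-linkage on a correlation distance) and the mean aggregation used here are assumptions for illustration, not the paper's exact construction.

```python
# Illustrative sketch (not the paper's algorithm) of a tree-structured feature
# network: hierarchical feature clustering produces multilevel features.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def tree_structured_features(X, levels=(8, 4, 2)):
    """Build multilevel features by repeatedly clustering the current
    feature set and aggregating each cluster into one new feature."""
    level_features = [X]
    current = X
    for n_clusters in levels:
        # distance between features: 1 - |correlation|
        D = 1.0 - np.abs(np.corrcoef(current, rowvar=False))
        np.fill_diagonal(D, 0.0)
        Z = linkage(squareform(D, checks=False), method="average")
        labels = fcluster(Z, t=n_clusters, criterion="maxclust")
        # one new higher-level feature per cluster: mean of its member features
        new = np.column_stack([current[:, labels == c].mean(axis=1)
                               for c in np.unique(labels)])
        level_features.append(new)
        current = new
    return level_features   # list of (samples x features_at_level) arrays

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 16))
    feats = tree_structured_features(X, levels=(8, 4, 2))
    print([f.shape[1] for f in feats])   # e.g. [16, 8, 4, 2] features per level
```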
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Going Beyond Neural Network Feature Similarity: The Network Feature Complexity and Its Interpretation Using Category Theory [64.06519549649495]
We provide the definition of what we call functionally equivalent features.
These features produce equivalent output under certain transformations.
We propose an efficient algorithm named Iterative Feature Merging.
arXiv Detail & Related papers (2023-10-10T16:27:12Z)
- Joint Feature and Differentiable $k$-NN Graph Learning using Dirichlet Energy [103.74640329539389]
We propose a deep feature selection (FS) method that simultaneously conducts feature selection and differentiable $k$-NN graph learning.
We employ Optimal Transport theory to address the non-differentiability issue of learning $k$-NN graphs in neural networks.
We validate the effectiveness of our model with extensive experiments on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-21T08:15:55Z)
- Learning Dynamics and Structure of Complex Systems Using Graph Neural Networks [13.509027957413409]
We trained graph neural networks to fit time series from an example nonlinear dynamical system.
We found simple interpretations of the learned representation and model components.
We successfully identified a 'graph translator' between the statistical interactions in belief propagation and parameters of the corresponding trained network.
arXiv Detail & Related papers (2022-02-22T15:58:16Z)
- The staircase property: How hierarchical structure can guide deep learning [38.713566366330326]
This paper identifies a structural property of data distributions that enables deep neural networks to learn hierarchically.
We prove that functions satisfying this property can be learned in polynomial time using layerwise coordinate descent on regular neural networks.
arXiv Detail & Related papers (2021-08-24T08:19:05Z)
- Optimal Approximation with Sparse Neural Networks and Applications [0.0]
We use deep sparsely connected neural networks to measure the complexity of a function class in $L^2(\mathbb{R}^d)$.
We also introduce a representation system, a countable collection of functions, to guide neural networks.
We then analyse the complexity of the class of $\beta$ cartoon-like functions using rate-distortion theory and the wedgelet construction.
arXiv Detail & Related papers (2021-08-14T05:14:13Z)
- Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning [43.504548777955854]
We study how contrastive learning learns the feature representations for neural networks by analyzing its feature learning process.
We prove that contrastive learning using ReLU networks provably learns the desired sparse features if proper augmentations are adopted.
arXiv Detail & Related papers (2021-05-31T16:42:09Z)
- The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup that allows a neural network to learn both its size and topology during the course of gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.