The Cooperative Network Architecture: Learning Structured Networks as Representation of Sensory Patterns
- URL: http://arxiv.org/abs/2407.05650v2
- Date: Wed, 04 Dec 2024 11:12:42 GMT
- Title: The Cooperative Network Architecture: Learning Structured Networks as Representation of Sensory Patterns
- Authors: Pascal J. Sager, Jan M. Deriu, Benjamin F. Grewe, Thilo Stadelmann, Christoph von der Malsburg
- Abstract summary: We present the cooperative network architecture (CNA), a model that learns structured nets to represent input patterns and deals robustly with noise, deformation, and out-of-distribution data.
- Score: 3.9848584845601014
- Abstract: Nets, cooperative networks of neurons, have been proposed as a format for the representation of sensory signals, as a physical implementation of the Gestalt phenomenon, and as a solution to the neural binding problem, while the direct interaction between nets by structure-sensitive matching has been proposed as a basis for object-global operations such as object detection. The nets are flexibly composed of overlapping net fragments, which are learned from statistical regularities of sensory input. We here present the cooperative network architecture (CNA), a concrete model that learns such net structure to represent input patterns and deals robustly with noise, deformation, and out-of-distribution data, thus laying the groundwork for a novel neural architecture.
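To make the abstract's notion of nets composed of overlapping fragments concrete, here is a minimal Python sketch under my own naming (`NetFragment` and `compose` are hypothetical, not from the paper): fragments are small graphs over feature nodes, and a net is grown by merging fragments wherever they share nodes.

```python
# Illustrative sketch only: net fragments as small graphs over feature nodes,
# composed into larger nets wherever they overlap on shared nodes.
# All names here (NetFragment, compose) are hypothetical, not from the paper.
from dataclasses import dataclass, field

@dataclass
class NetFragment:
    """A small graph over feature nodes, e.g. learned from co-activation."""
    nodes: frozenset                          # feature identifiers
    edges: set = field(default_factory=set)   # pairs of feature identifiers

def compose(fragments, min_overlap=1):
    """Greedily merge fragments that share at least `min_overlap` nodes,
    yielding larger candidate nets (one possible reading of 'overlapping
    net fragments' in the abstract)."""
    nets = []
    for frag in fragments:
        for net in nets:
            if len(net["nodes"] & frag.nodes) >= min_overlap:
                net["nodes"] |= frag.nodes
                net["edges"] |= frag.edges
                break
        else:
            nets.append({"nodes": set(frag.nodes), "edges": set(frag.edges)})
    return nets

# Example: two fragments sharing node "b" fuse into a single net over {a, b, c}.
f1 = NetFragment(frozenset({"a", "b"}), {("a", "b")})
f2 = NetFragment(frozenset({"b", "c"}), {("b", "c")})
print(compose([f1, f2]))
```

In the CNA itself the fragments are learned from statistical regularities of sensory input; here they are hard-coded purely for illustration.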
Related papers
- Continually Learning Structured Visual Representations via Network Refinement with Rerelation [15.376349115976534]
The current machine learning paradigm relies on continuous representations such as neural networks, which iteratively adjust parameters to approximate outcomes.
We propose a method that learns visual space in a structured, continual manner.
arXiv Detail & Related papers (2025-02-19T18:18:27Z)
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
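As a rough illustration of the chunking idea in the entry above, the sketch below drives a small tanh RNN with a repeating motif and clusters its hidden states to expose recurring embedding states. The untrained random RNN and plain k-means are my stand-ins for the paper's trained RNNs and its actual chunk-extraction procedure.

```python
# Hypothetical illustration: recurring RNN hidden states under a periodic input
# can be recovered by clustering, each cluster acting as a candidate "chunk".
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def rnn_step(h, x, W_h, W_x):
    return np.tanh(W_h @ h + W_x @ x)

# Toy sequence with an imposed regularity: the motif [0, 1, 2] repeats.
seq = [0, 1, 2] * 50
vocab, hidden = 3, 8
W_h = rng.normal(scale=0.3, size=(hidden, hidden))  # contractive dynamics
W_x = rng.normal(scale=0.5, size=(hidden, vocab))

h = np.zeros(hidden)
states = []
for tok in seq:
    h = rnn_step(h, np.eye(vocab)[tok], W_h, W_x)
    states.append(h.copy())
states = np.array(states)

# Cluster hidden states to expose the recurring embedding states.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(states)
print(labels[:12])  # after a short transient, labels repeat with the motif
```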
- Identifying Sub-networks in Neural Networks via Functionally Similar Representations [41.028797971427124]
We take a step toward automating the understanding of neural networks by investigating the existence of distinct sub-networks.
Specifically, we explore a novel automated and task-agnostic approach based on the notion of functionally similar representations within neural networks.
We show the proposed approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.
arXiv Detail & Related papers (2024-10-21T20:19:00Z)
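One concrete way to quantify "functionally similar representations" as in the entry above is linear centered kernel alignment (CKA); this is my choice of similarity measure for illustration, not necessarily the paper's.

```python
# Linear CKA between two activation matrices: high when two (sub-)networks
# compute functionally related representations, low when they are unrelated.
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activations X of shape (n, d1) and Y of shape (n, d2)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    return num / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 32))
B = A @ rng.normal(size=(32, 16))   # linear function of A: functionally related
C = rng.normal(size=(100, 16))      # independent: unrelated
print(round(linear_cka(A, B), 3), round(linear_cka(A, C), 3))  # high vs. low
```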
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Learning Interpretable Models for Coupled Networks Under Domain Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we prove that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
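The sketch below illustrates one plausible form of the contrastive instance pair described in the entry above: a target node paired with a local subgraph, positive when the subgraph is the node's own neighborhood. The function name and sampling details are my assumptions, not the paper's code.

```python
# Hypothetical sketch of a node-vs-subgraph contrastive instance pair:
# positive pairs match a node with its own neighborhood, negative pairs with a
# random other node's neighborhood, mimicking the mismatch an anomaly detector
# should learn to flag.
import random
import networkx as nx

def sample_instance_pair(g: nx.Graph, node, hops=1, positive=True):
    """Return (node, subgraph, label) for contrastive training."""
    anchor = node if positive else random.choice([n for n in g if n != node])
    sub_nodes = nx.single_source_shortest_path_length(g, anchor, cutoff=hops)
    return node, g.subgraph(sub_nodes).copy(), int(positive)

g = nx.karate_club_graph()
pos = sample_instance_pair(g, 0, positive=True)
neg = sample_instance_pair(g, 0, positive=False)
print(len(pos[1]), pos[2], len(neg[1]), neg[2])
```

In the paper these pairs feed a graph neural network-based contrastive model over node attributes as well as structure; the sketch covers only the structural sampling step.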
- Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
arXiv Detail & Related papers (2021-02-13T18:54:49Z)
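As a loose illustration of latent motifs, the sketch below samples small subgraph adjacency "patches" from a network and factorizes them with NMF, so the factors act as motif-like patterns and a denoised patch is its low-rank reconstruction. The paper's actual subgraph sampling and dictionary-learning procedure differs; treat this purely as a sketch.

```python
# Rough sketch, not the authors' algorithm: NMF over sampled adjacency patches
# as a stand-in for learning latent motifs and denoising by reconstruction.
import numpy as np
import networkx as nx
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
g = nx.les_miserables_graph()
nodes = list(g)
k = 6  # patch size: number of nodes per sampled subgraph

patches = []
for _ in range(300):
    sample = rng.choice(len(nodes), size=k, replace=False)
    sub = nx.to_numpy_array(g, nodelist=[nodes[i] for i in sample])
    patches.append(sub.flatten())
X = np.array(patches)  # (300, k*k), non-negative as required by NMF

nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)                        # per-patch motif loadings
motifs = nmf.components_.reshape(4, k, k)       # latent motifs as k-by-k patterns
denoised = (W @ nmf.components_).reshape(-1, k, k)  # low-rank reconstructions
print(motifs.shape, denoised.shape)
```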
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
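The entry above describes assigning learnable parameters to the edges of a complete graph so that connectivity can be learned differentiably; a miniature PyTorch rendering of that idea (class name and gating details are my own) might look like this.

```python
# Sketch: a complete DAG of computation nodes where each edge i -> j carries a
# learnable scalar gate controlling how strongly node i feeds node j.
import torch
import torch.nn as nn

class CompleteGraphNet(nn.Module):
    def __init__(self, n_nodes=4, dim=16):
        super().__init__()
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_nodes))
        # One learnable gate per ordered pair (i < j): edge i -> j.
        self.edge = nn.Parameter(torch.zeros(n_nodes, n_nodes))

    def forward(self, x):
        outs = [self.ops[0](x)]
        for j in range(1, len(self.ops)):
            # Sigmoid keeps gates in (0, 1); gradient descent can strengthen
            # or effectively prune individual connections.
            agg = sum(torch.sigmoid(self.edge[i, j]) * outs[i] for i in range(j))
            outs.append(self.ops[j](agg))
        return outs[-1]

net = CompleteGraphNet()
y = net(torch.randn(2, 16))
print(y.shape)  # torch.Size([2, 16])
```

Because the gates are ordinary parameters, connectivity is trained jointly with the weights by standard backpropagation, which is what makes the search over topologies differentiable.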
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup that allows a neural network to learn both its size and topology during gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- Investigating the Compositional Structure of Deep Neural Networks [1.8899300124593645]
We introduce a novel theoretical framework based on the compositional structure of piecewise linear activation functions.
The framework makes it possible to characterize input instances with respect to both the predicted label and the specific (linear) transformation used to perform predictions.
Preliminary tests on the MNIST dataset show that our method can group input instances with regard to their similarity in the internal representation of the neural network.
arXiv Detail & Related papers (2020-02-17T14:16:17Z)
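To ground the piecewise-linear perspective of the last entry, the sketch below groups inputs of a tiny random ReLU network by their activation pattern: inputs sharing a pattern fall in the same linear region and are therefore processed by the same affine transformation. All names and sizes here are illustrative.

```python
# Group inputs by the linear region a ReLU network assigns them: the binary
# pattern of active units fixes the affine map applied to the input.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def activation_pattern(x):
    """Binary signature of which ReLU units fire for input x."""
    pre = W1 @ x + b1
    return tuple((pre > 0).astype(int))

groups = defaultdict(list)
for _ in range(200):
    x = rng.normal(size=2)
    groups[activation_pattern(x)].append(x)
print(f"{len(groups)} linear regions hit out of 200 random inputs")
```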