On the universality of neural encodings in CNNs
- URL: http://arxiv.org/abs/2409.19460v1
- Date: Sat, 28 Sep 2024 21:30:25 GMT
- Title: On the universality of neural encodings in CNNs
- Authors: Florentin Guth, Brice Ménard
- Abstract summary: We show that, for a range of layers of VGG-type networks, the learned eigenvectors appear to be universal across different natural image datasets.
They explain, at a more fundamental level, the success of transfer learning.
- Score: 5.064404027153094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore the universality of neural encodings in convolutional neural networks trained on image classification tasks. We develop a procedure to directly compare the learned weights rather than their representations. It is based on a factorization of spatial and channel dimensions and measures the similarity of aligned weight covariances. We show that, for a range of layers of VGG-type networks, the learned eigenvectors appear to be universal across different natural image datasets. Our results suggest the existence of a universal neural encoding for natural images. They explain, at a more fundamental level, the success of transfer learning. Our work shows that, instead of aiming at maximizing the performance of neural networks, one can alternatively attempt to maximize the universality of the learned encoding, in order to build a principled foundation model.
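The abstract does not include code, but the flavor of the comparison can be sketched: compute the covariance of a layer's filters, eigendecompose it, and measure how well the leading eigenvectors of two trained networks align. The sketch below flattens channel and spatial dimensions together and skips the paper's factorization and alignment steps; all names are illustrative.

```python
import numpy as np

def weight_covariance_eigvecs(w, k_top=16):
    """Top eigenvectors of the covariance of a conv layer's filters.

    w: weights of shape (c_out, c_in, kh, kw) from a trained layer.
    Note: this flattens channel and spatial dimensions together; the paper
    factorizes them and aligns covariances before comparing.
    """
    c_out = w.shape[0]
    flat = w.reshape(c_out, -1)            # one row per filter
    flat = flat - flat.mean(axis=0)        # center across filters
    cov = flat.T @ flat / c_out            # covariance of filter coordinates
    _, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k_top]     # leading eigenvectors first

def subspace_similarity(u1, u2):
    """Mean squared cosine of the principal angles between two subspaces."""
    s = np.linalg.svd(u1.T @ u2, compute_uv=False)  # cosines of principal angles
    return float(np.mean(s ** 2))

# Toy usage with random stand-ins for two networks' layer weights:
rng = np.random.default_rng(0)
w_a = rng.normal(size=(64, 16, 3, 3))
w_b = rng.normal(size=(64, 16, 3, 3))
print(subspace_similarity(weight_covariance_eigvecs(w_a),
                          weight_covariance_eigvecs(w_b)))
```

The singular values of the product of two orthonormal bases are the cosines of the principal angles between the spanned subspaces, so a similarity near 1 would indicate closely aligned eigenspaces.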
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
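As a rough illustration of the "networks as computational graphs of parameters" idea (a hedged sketch, not the paper's construction), an MLP's weight matrices can be unrolled into an explicit graph whose nodes are neurons and whose edges carry the weights:

```python
import numpy as np

def mlp_to_graph(weight_mats):
    """Represent an MLP as a graph: nodes are neurons, edges carry weights.

    weight_mats: list of arrays; a layer mapping n_in -> n_out has shape (n_out, n_in).
    Returns (num_nodes, edges) with edges as (src, dst, weight) triples.
    """
    sizes = [weight_mats[0].shape[1]] + [w.shape[0] for w in weight_mats]
    offsets = np.cumsum([0] + sizes)          # global node ids per layer
    edges = []
    for l, w in enumerate(weight_mats):
        for j in range(w.shape[0]):           # destination neuron in layer l+1
            for i in range(w.shape[1]):       # source neuron in layer l
                edges.append((offsets[l] + i, offsets[l + 1] + j, float(w[j, i])))
    return int(offsets[-1]), edges

# Toy 3-2-1 MLP:
rng = np.random.default_rng(0)
n, e = mlp_to_graph([rng.normal(size=(2, 3)), rng.normal(size=(1, 2))])
print(n, len(e))   # 6 nodes, 8 edges
```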
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Understanding the Role of Pathways in a Deep Neural Network [4.456675543894722]
We analyze a convolutional neural network (CNN) trained on a classification task and present an algorithm to extract the diffusion pathways of individual pixels.
We find that the few largest pathways of an individual pixel tend to cross the feature maps in each layer that are important for classification.
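The paper's algorithm operates on CNN feature maps; as a heavily simplified analogue, a greedy trace that follows the largest absolute contribution from one unit at each layer of an MLP might look like this (illustrative only):

```python
import numpy as np

def greedy_pathway(weight_mats, x, start_unit):
    """Trace one 'pathway' of an input unit through a ReLU MLP, greedily.

    At each layer, follow the outgoing connection with the largest absolute
    contribution |w_ji * a_i| from the current unit. Returns unit indices per layer.
    """
    path = [start_unit]
    a = x
    for w in weight_mats:
        contrib = np.abs(w[:, path[-1]] * a[path[-1]])  # contributions of current unit
        path.append(int(np.argmax(contrib)))            # strongest receiver
        a = np.maximum(w @ a, 0.0)                      # ReLU forward pass
    return path

rng = np.random.default_rng(0)
mats = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
print(greedy_pathway(mats, rng.normal(size=4), start_unit=1))
```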
arXiv Detail & Related papers (2024-02-28T07:53:19Z)
- Dynamics Based Neural Encoding with Inter-Intra Region Connectivity [2.3825930751052358]
We present the first large-scale study comparing video understanding models against visual cortex recordings obtained with video stimuli.
We provide key insights on how video understanding models predict visual cortex responses.
We propose a novel neural encoding scheme that is built on top of the best-performing video understanding models.
arXiv Detail & Related papers (2024-02-19T20:29:49Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
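The symmetry in question is easy to verify concretely: permuting a hidden layer's units, together with the matching rows and columns of the adjacent weight matrices, leaves the network's function unchanged. A minimal demonstration:

```python
import numpy as np

# Hidden-layer neurons of an MLP have no inherent order: permuting them (and
# un-permuting the downstream weights) leaves the network's outputs unchanged.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
w2 = rng.normal(size=(4, 16))
x = rng.normal(size=8)

perm = rng.permutation(16)
w1p, b1p = w1[perm], b1[perm]      # permute hidden units
w2p = w2[:, perm]                  # apply the same permutation to incoming weights

out  = w2  @ np.maximum(w1  @ x + b1,  0.0)
outp = w2p @ np.maximum(w1p @ x + b1p, 0.0)
print(np.allclose(out, outp))      # True: the function is invariant
```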
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
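The paper learns discrete codes end-to-end through a vector-quantized auto-decoder; as a simplified stand-in, the storage-saving idea of replacing grid entries with indices into a small shared codebook can be sketched with plain k-means:

```python
import numpy as np

def compress_grid(grid, n_codes=64, iters=20, seed=0):
    """Compress a feature grid with a shared codebook (here: plain k-means).

    grid: (H, W, D) array of feature vectors. Returns (indices, codebook) so the
    grid is stored as small integer indices into n_codes shared D-dim vectors.
    """
    rng = np.random.default_rng(seed)
    flat = grid.reshape(-1, grid.shape[-1])
    codebook = flat[rng.choice(len(flat), n_codes, replace=False)].copy()
    for _ in range(iters):
        dists = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(1)                 # nearest code per grid cell
        for k in range(n_codes):
            members = flat[idx == k]
            if len(members):
                codebook[k] = members.mean(0)
    return idx.reshape(grid.shape[:2]), codebook

grid = np.random.default_rng(1).normal(size=(32, 32, 8))
idx, book = compress_grid(grid)
recon = book[idx]                             # decompression is a table lookup
print(idx.shape, book.shape, recon.shape)
```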
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Neural Knitworks: Patched Neural Implicit Representation Networks [1.0470286407954037]
We propose Knitwork, an architecture for neural implicit representation learning of natural images that enables image synthesis.
To the best of our knowledge, this is the first coordinate-based patch implementation tailored to synthesis tasks such as image inpainting, super-resolution, and denoising.
The results show that modeling natural images using patches, rather than pixels, produces results of higher fidelity.
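As a toy version of the patch-based (rather than pixel-based) implicit representation idea, assuming a PyTorch setting with illustrative names, a coordinate-to-patch MLP could look like:

```python
import torch
import torch.nn as nn

class PatchKnit(nn.Module):
    """Coordinate-to-patch MLP: maps a patch center (x, y) to a p x p RGB patch.

    A stand-in for the patch-based implicit representation idea: each query
    coordinate yields a whole patch instead of a single pixel.
    """
    def __init__(self, patch=8, hidden=256):
        super().__init__()
        self.patch = patch
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * patch * patch),
        )

    def forward(self, coords):                 # coords: (N, 2) in [-1, 1]
        out = self.net(coords)
        return out.view(-1, 3, self.patch, self.patch)

model = PatchKnit()
patches = model(torch.rand(16, 2) * 2 - 1)
print(patches.shape)                           # torch.Size([16, 3, 8, 8])
```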
arXiv Detail & Related papers (2021-09-29T13:10:46Z)
- Spatial Dependency Networks: Neural Layers for Improved Generative Image Modeling [79.15521784128102]
We introduce a novel neural network for building image generators (decoders) and apply it to variational autoencoders (VAEs).
In our spatial dependency networks (SDNs), feature maps at each level of a deep neural net are computed in a spatially coherent way.
We show that augmenting the decoder of a hierarchical VAE with spatial dependency layers considerably improves density estimation.
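The SDN layer itself is more elaborate, but the notion of computing feature maps in a spatially coherent way can be caricatured by a raster-order sweep in which each position is updated from its already-computed top and left neighbors (a hedged sketch, not the authors' layer):

```python
import torch
import torch.nn as nn

class DirectedSpatialLayer(nn.Module):
    """Toy spatially coherent layer: each position is computed from the input
    and its already-computed top and left neighbors, in a raster-order sweep."""
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Linear(3 * channels, channels)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        out = x.new_zeros(b, c, h, w)
        for i in range(h):
            for j in range(w):
                top  = out[:, :, i - 1, j] if i > 0 else x.new_zeros(b, c)
                left = out[:, :, i, j - 1] if j > 0 else x.new_zeros(b, c)
                z = torch.cat([x[:, :, i, j], top, left], dim=1)
                out[:, :, i, j] = torch.tanh(self.mix(z))
        return out

layer = DirectedSpatialLayer(8)
print(layer(torch.randn(2, 8, 6, 6)).shape)    # torch.Size([2, 8, 6, 6])
```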
arXiv Detail & Related papers (2021-03-16T07:01:08Z)
- Global Adaptive Filtering Layer for Computer Vision [2.2758845733923687]
We devise a universal adaptive neural layer to "learn" an optimal frequency filter for each image together with the weights of the base neural network that performs some computer vision task.
The proposed approach takes the source image in the spatial domain, automatically selects the best frequencies in the frequency domain, and passes the inverse-transformed image to the main neural network.
We observe that light networks gain a noticeable boost in performance metrics, whereas heavy ones converge faster in training, when our adaptive layer is allowed to "learn" alongside the main architecture.
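A minimal sketch of such a layer, assuming a PyTorch setting and a fixed input resolution (illustrative, not the authors' implementation): a learnable elementwise mask applied between a forward and an inverse FFT, trained jointly with the downstream network.

```python
import torch
import torch.nn as nn

class AdaptiveFrequencyFilter(nn.Module):
    """Learnable global frequency filter: FFT -> learned elementwise mask -> iFFT.

    The mask reweights each frequency of the input image and is trained jointly
    with whatever network consumes the filtered image.
    """
    def __init__(self, h, w):
        super().__init__()
        self.mask = nn.Parameter(torch.ones(h, w // 2 + 1))  # rfft2 half-spectrum

    def forward(self, img):                    # img: (B, C, H, W)
        spec = torch.fft.rfft2(img)            # complex spectrum
        spec = spec * self.mask                # reweight each frequency
        return torch.fft.irfft2(spec, s=img.shape[-2:])

filt = AdaptiveFrequencyFilter(32, 32)
x = torch.randn(4, 3, 32, 32)
print(filt(x).shape)                           # torch.Size([4, 3, 32, 32])
```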
arXiv Detail & Related papers (2020-10-02T19:43:49Z)
- Neural Cellular Automata Manifold [84.08170531451006]
We show that the neural network architecture of Neural Cellular Automata (NCA) can be encapsulated in a larger NN.
This allows us to propose a new model that encodes a manifold of NCAs, each of them capable of generating a distinct image.
In biological terms, our approach would play the role of the transcription factors, modulating the mapping of genes into specific proteins that drive cellular differentiation.
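For background, a single NCA update step in the style of common NCA implementations (fixed Sobel perception filters followed by a small learned per-cell update network) can be sketched as:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NCAStep(nn.Module):
    """One Neural Cellular Automata update: fixed perception filters followed
    by a small learned network applied identically to every cell."""
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8
        ident = torch.zeros(3, 3); ident[1, 1] = 1.0
        kernels = torch.stack([ident, sobel_x, sobel_x.t()])         # (3, 3, 3)
        self.register_buffer(
            "filters", kernels.repeat(channels, 1, 1).unsqueeze(1))  # (3C, 1, 3, 3)
        self.update = nn.Sequential(
            nn.Conv2d(3 * channels, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1),
        )
        self.channels = channels

    def forward(self, state):                  # state: (B, C, H, W)
        perceived = F.conv2d(state, self.filters, padding=1,
                             groups=self.channels)   # depthwise perception
        return state + self.update(perceived)        # residual cell update

nca = NCAStep()
s = torch.randn(1, 16, 32, 32)
print(nca(s).shape)                            # torch.Size([1, 16, 32, 32])
```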
arXiv Detail & Related papers (2020-06-22T11:41:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.