Pre-processing and Compression: Understanding Hidden Representation Refinement Across Imaging Domains via Intrinsic Dimension
- URL: http://arxiv.org/abs/2408.08381v4
- Date: Mon, 21 Oct 2024 15:18:39 GMT
- Title: Pre-processing and Compression: Understanding Hidden Representation Refinement Across Imaging Domains via Intrinsic Dimension
- Authors: Nicholas Konz, Maciej A. Mazurowski
- Abstract summary: We show that medical image models peak in representation ID earlier in the network.
We also find a strong correlation of this peak representation ID with the ID of the data in its input space.
- Score: 1.7495213911983414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been interest in how geometric properties such as intrinsic dimension (ID) of a neural network's hidden representations change through its layers, and how such properties are predictive of important model behavior such as generalization ability. However, evidence has begun to emerge that such behavior can change significantly depending on the domain of the network's training data, such as natural versus medical images. Here, we further this inquiry by exploring how the ID of a network's learned representations changes through its layers, in essence, characterizing how the network successively refines the information content of input data to be used for predictions. Analyzing eleven natural and medical image datasets across six network architectures, we find that how ID changes through the network differs noticeably between natural and medical image models. Specifically, medical image models peak in representation ID earlier in the network, implying a difference in the image features and their abstractness that are typically used for downstream tasks in these domains. Additionally, we discover a strong correlation of this peak representation ID with the ID of the data in its input space, implying that the intrinsic information content of a model's learned representations is guided by that of the data it was trained on. Overall, our findings emphasize notable discrepancies in network behavior between natural and non-natural imaging domains regarding hidden representation information content, and provide further insights into how a network's learned features are shaped by its training data.
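The paper tracks the intrinsic dimension (ID) of hidden representations layer by layer. A common estimator for ID in this line of work is TwoNN (Facco et al., 2017), which uses only the ratio of each point's two nearest-neighbor distances. The sketch below is illustrative, not the authors' exact pipeline; the function name and the sanity-check data are assumptions for demonstration:

```python
import numpy as np

def two_nn_id(X: np.ndarray) -> float:
    """TwoNN intrinsic-dimension estimate: for each point, form the ratio
    mu = r2/r1 of the distances to its two nearest neighbors; under the
    TwoNN model mu is Pareto-distributed with shape parameter equal to
    the ID, giving the maximum-likelihood estimate ID = N / sum(log mu)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Brute-force pairwise distances; use a KD-tree for large N.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude self-distances
    d.sort(axis=1)
    mu = d[:, 1] / d[:, 0]        # ratio of 2nd to 1st nearest-neighbor distance
    return n / np.log(mu).sum()

# Sanity check: points lying on a random 2-D subspace of a 10-D ambient space
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
print(two_nn_id(pts))  # close to 2, despite the 10-D embedding
```

Applied to a network's hidden activations at each layer, this kind of estimator yields the layer-wise ID profile whose peak location the paper compares between natural and medical image models.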
Related papers
- Cross-domain Variational Capsules for Information Extraction [0.0]
The intent was to identify prominent characteristics in data and use this identification mechanism to auto-generate insight from data in other unseen domains.
An information extraction algorithm is proposed which is a combination of Variational Autoencoders (VAEs) and Capsule Networks.
Noticing a dearth in the number of datasets that contain visible characteristics in images belonging to various domains, the Multi-domain Image Characteristics dataset was created and made publicly available.
arXiv Detail & Related papers (2022-10-13T20:04:36Z) - Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models of their functional analogue in the brain, the ventral stream of visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z) - Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Transferring Knowledge with Attention Distillation for Multi-Domain Image-to-Image Translation [28.272982411879845]
We show how gradient-based attentions can be used as knowledge to be conveyed in a teacher-student paradigm for image-to-image translation tasks.
It is also demonstrated how "pseudo"-attentions can be employed during training when teacher and student networks are trained on different domains.
arXiv Detail & Related papers (2021-08-17T06:47:04Z) - Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting network architectures tailored to each domain task, together with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z) - Characterization and recognition of handwritten digits using Julia [0.0]
We implement a hybrid neural network model that is capable of recognizing the digits of the MNIST dataset.
The proposed neural network model extracts features from the image and recognizes them layer by layer.
It also supports auto-encoding and variational auto-encoding of the MNIST dataset.
arXiv Detail & Related papers (2021-02-24T00:30:41Z) - An analysis of the transfer learning of convolutional neural networks for artistic images [1.9336815376402716]
Transfer learning from huge natural image datasets has become the de facto core of art analysis applications.
In this paper, we first use techniques for visualizing the network internal representations in order to provide clues to the understanding of what the network has learned on artistic images.
We provide a quantitative analysis of the changes introduced by the learning process thanks to metrics in both the feature and parameter spaces, as well as metrics computed on the set of maximal activation images.
arXiv Detail & Related papers (2020-11-05T09:45:32Z) - DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfactory segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z) - Self-Supervised Discovery of Anatomical Shape Landmarks [5.693003993674883]
We propose a self-supervised, neural network approach for automatically positioning and detecting landmarks in images that can be used for subsequent analysis.
We present a complete framework, which only takes a set of input images and produces landmarks that are immediately usable for statistical shape analysis.
arXiv Detail & Related papers (2020-06-13T00:56:33Z) - Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, we propose the integration of a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.