AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space
- URL: http://arxiv.org/abs/2406.18344v1
- Date: Wed, 26 Jun 2024 13:38:16 GMT
- Title: AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space
- Authors: Huzheng Yang, James Gee, Jianbo Shi
- Abstract summary: We study the intriguing connection between visual data, deep networks, and the brain.
Our method creates a universal channel alignment by using brain voxel fMRI response prediction as the training objective.
- Score: 9.302098067235507
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study the intriguing connection between visual data, deep networks, and the brain. Our method creates a universal channel alignment by using brain voxel fMRI response prediction as the training objective. We discover that deep networks, trained with different objectives, share common feature channels across various models. These channels can be clustered into recurring sets, corresponding to distinct brain regions, indicating the formation of visual concepts. Tracing the clusters of channel responses onto the images, we see semantically meaningful object segments emerge, even without any supervised decoder. Furthermore, the universal feature alignment and the clustering of channels produce a picture and quantification of how visual information is processed through the different network layers, enabling precise comparisons between the networks.
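As a concrete illustration of the pipeline described in the abstract, here is a minimal sketch on synthetic data: fit a linear brain-encoding model per network, treat each channel's voxel weights as its coordinates in a shared brain-indexed space, and cluster channels across models. All names, shapes, and hyperparameters below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the AlignedCut idea (illustrative, not the authors' code):
# 1) project each network's channels into a shared space trained to predict
#    fMRI voxel responses, then 2) cluster channels in that shared space.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins: features from two networks on the same 200 images.
feats_a = rng.normal(size=(200, 384))   # network A: 384 channels (pooled)
feats_b = rng.normal(size=(200, 768))   # network B: 768 channels (pooled)
voxels  = rng.normal(size=(200, 1000))  # fMRI responses, 1000 voxels

# Brain encoding as the shared objective: linear map channels -> voxels.
enc_a = Ridge(alpha=10.0).fit(feats_a, voxels)
enc_b = Ridge(alpha=10.0).fit(feats_b, voxels)

# Each channel is now represented by its voxel-weight vector, i.e. its
# "address" in a brain-indexed universal space, comparable across models.
chan_embed = np.vstack([enc_a.coef_.T, enc_b.coef_.T])  # (384+768, 1000)

# Cluster channels across both networks; recurring clusters correspond to
# shared visual concepts in the paper's analysis.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(chan_embed)
print(labels[:384])   # cluster ids for network A's channels
print(labels[384:])   # cluster ids for network B's channels
```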
Related papers
- Learning Object-Centric Representation via Reverse Hierarchy Guidance [73.05170419085796]
Object-Centric Learning (OCL) seeks to enable neural networks to identify individual objects in visual scenes.
RHGNet introduces a top-down pathway that works in different ways in the training and inference processes.
Our model achieves state-of-the-art performance on several commonly used datasets.
arXiv Detail & Related papers (2024-05-17T07:48:27Z)
- Understanding the Role of Pathways in a Deep Neural Network [4.456675543894722]
We analyze a convolutional neural network (CNN) trained on an image classification task and present an algorithm to extract the diffusion pathways of individual pixels.
We find that the few largest pathways of an individual pixel tend to cross the feature maps that are important for classification in each layer.
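The summary does not spell out the pathway-extraction algorithm itself; the hedged sketch below approximates the idea by ranking each layer's feature maps by the gradient magnitude linking them to a single input pixel. The toy network and the chosen pixel are assumptions for illustration.

```python
# Hedged sketch: rank each layer's feature maps by how strongly one input
# pixel influences them (gradient magnitude). This approximates a per-pixel
# "pathway"; the paper's exact diffusion-pathway algorithm may differ.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

x = torch.randn(1, 3, 32, 32, requires_grad=True)
acts = []
h = x
for layer in model:
    h = layer(h)
    if isinstance(layer, nn.ReLU):
        acts.append(h)

# Influence of pixel (y0, x0) on each feature map: sum each map, take its
# gradient w.r.t. the input, and read it off at the chosen pixel location.
y0, x0 = 16, 16
for i, a in enumerate(acts):
    scores = []
    for c in range(a.shape[1]):
        g, = torch.autograd.grad(a[0, c].sum(), x, retain_graph=True)
        scores.append(g[0, :, y0, x0].abs().sum().item())
    top = sorted(range(len(scores)), key=lambda c: -scores[c])[:3]
    print(f"layer {i}: maps most connected to pixel ({y0},{x0}): {top}")
```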
arXiv Detail & Related papers (2024-02-28T07:53:19Z)
- Brain Decodes Deep Nets [9.302098067235507]
We developed a tool for visualizing and analyzing large pre-trained vision models by mapping them onto the brain.
Our innovation arises from a surprising use of brain encoding: predicting brain fMRI measurements in response to images.
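Brain encoding of this kind is commonly implemented as a regularized linear readout from network features to voxel responses. The sketch below (synthetic data; the layer features and split sizes are assumptions) fits one encoder per layer and asks which layer best predicts each voxel, giving a coarse layer-to-brain mapping.

```python
# Sketch of brain encoding as a probe (assumed setup, synthetic data):
# fit one linear encoder per network layer and ask which layer best
# predicts each voxel -- a layer-to-brain-region mapping.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_imgs, n_vox = 300, 500
voxels = rng.normal(size=(n_imgs, n_vox))
layer_feats = {f"layer{i}": rng.normal(size=(n_imgs, 128)) for i in range(4)}

# Held-out split for honest per-voxel prediction scores.
tr, te = slice(0, 250), slice(250, 300)
scores = {}
for name, F in layer_feats.items():
    enc = RidgeCV(alphas=[1.0, 10.0, 100.0]).fit(F[tr], voxels[tr])
    pred = enc.predict(F[te])
    # Pearson r per voxel between predicted and measured responses.
    pc = [np.corrcoef(pred[:, v], voxels[te][:, v])[0, 1] for v in range(n_vox)]
    scores[name] = np.array(pc)

best_layer = np.argmax(np.stack([scores[f"layer{i}"] for i in range(4)]), axis=0)
print("voxels best explained by each layer:", np.bincount(best_layer))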
arXiv Detail & Related papers (2023-12-03T04:36:04Z)
- Squeeze aggregated excitation network [0.0]
Convolutional neural networks learn spatial representations that capture patterns in vision tasks.
We propose SaEnet, a squeeze aggregated excitation network, for learning global channelwise representations between layers.
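The summary does not detail SaEnet's aggregated block; as a reference point, below is a standard squeeze-and-excitation-style block for global channelwise recalibration, which this family of networks builds on. Class and parameter names are illustrative, not from the paper.

```python
# Illustrative squeeze-and-excitation-style block: global average pooling
# ("squeeze") followed by a bottleneck MLP that rescales channels
# ("excitation"). SaEnet's aggregated variant differs in detail.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: global channel descriptor
        w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel gate
        return x * w                      # recalibrate feature maps

x = torch.randn(2, 64, 8, 8)
print(SqueezeExcite(64)(x).shape)  # torch.Size([2, 64, 8, 8])
```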
arXiv Detail & Related papers (2023-08-25T12:30:48Z)
- Efficient Multi-Scale Attention Module with Cross-Spatial Learning [4.046170185945849]
A novel efficient multi-scale attention (EMA) module is proposed.
We focus on retaining per-channel information while reducing computational overhead.
We conduct extensive ablation studies and experiments on image classification and object detection tasks.
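As a hedged sketch of the general idea (not the published EMA module, whose channel grouping and cross-spatial interaction are more elaborate), the block below combines a global per-channel descriptor with a cheap depthwise local branch and gates the input with their sum.

```python
# Hedged, simplified sketch in the spirit of efficient multi-scale attention:
# a 1x1-style global branch preserves per-channel information, a depthwise
# 3x3 branch adds local spatial context, and their sum gates the input.
import torch
import torch.nn as nn

class SimpleMultiScaleAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Depthwise conv: cheap local context that stays channelwise.
        self.local = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.gate = nn.Sigmoid()

    def forward(self, x):
        g = x.mean(dim=(2, 3), keepdim=True)   # global per-channel context
        l = self.local(x)                      # local spatial context
        return x * self.gate(g + l)            # multi-scale gating

x = torch.randn(2, 32, 16, 16)
print(SimpleMultiScaleAttention(32)(x).shape)  # torch.Size([2, 32, 16, 16])
```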
arXiv Detail & Related papers (2023-05-23T00:35:47Z)
- Peripheral Vision Transformer [52.55309200601883]
We take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition.
We propose incorporating peripheral position encoding into the multi-head self-attention layers to let the network learn, from the training data, to partition the visual field into diverse peripheral regions.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
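PerViT's exact peripheral position encoding is not given in this summary; the sketch below captures the spirit with an assumed formulation: each attention head learns a slope that biases attention logits by query-key distance, letting heads specialize to near (foveal) or far (peripheral) regions of the visual field.

```python
# Sketch of the idea, not PerViT's exact formulation: bias each attention
# head by a learned function of query-key distance on the token grid.
import torch
import torch.nn as nn

class DistanceBiasedAttention(nn.Module):
    def __init__(self, dim: int, heads: int, grid: int):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        # Precompute pairwise Euclidean distances between grid positions.
        ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
        pos = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
        self.register_buffer("dist", torch.cdist(pos, pos))  # (N, N)
        # One learned slope per head: how attention changes with distance.
        self.slope = nn.Parameter(torch.randn(heads))

    def forward(self, x):  # x: (B, N, dim), N == grid * grid
        B, N, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.heads, -1).transpose(1, 2)
        k = k.view(B, N, self.heads, -1).transpose(1, 2)
        v = v.view(B, N, self.heads, -1).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn + self.slope.view(1, -1, 1, 1) * self.dist  # peripheral bias
        out = attn.softmax(dim=-1) @ v
        return out.transpose(1, 2).reshape(B, N, D)

x = torch.randn(2, 16, 64)  # 4x4 grid of tokens
print(DistanceBiasedAttention(64, 4, 4)(x).shape)  # torch.Size([2, 16, 64])
```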
arXiv Detail & Related papers (2022-06-14T12:47:47Z)
- Channel redundancy and overlap in convolutional neural networks with channel-wise NNK graphs [36.479195100553085]
Feature spaces in the deep layers of convolutional neural networks (CNNs) are often very high-dimensional and difficult to interpret.
We theoretically analyze channel-wise non-negative kernel (CW-NNK) regression graphs to quantify the overlap between channels.
We find that redundancy between channels is significant and varies with the layer depth and the level of regularization.
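CW-NNK regression graphs are more principled than the following, but as a rough, hedged proxy for channel overlap one can measure pairwise cosine similarity between flattened channel activations and count near-duplicates. All data and thresholds below are synthetic assumptions.

```python
# Hedged proxy for channel redundancy (the paper uses CW-NNK graphs):
# cosine similarity between channel activations over a batch.
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(64, 32, 8, 8))              # (batch, channels, H, W)
chans = acts.transpose(1, 0, 2, 3).reshape(32, -1)  # one row per channel
chans /= np.linalg.norm(chans, axis=1, keepdims=True)

sim = chans @ chans.T                               # (32, 32) cosine matrix
np.fill_diagonal(sim, 0.0)
redundant = (sim > 0.9).any(axis=1)
print(f"{redundant.sum()} of 32 channels have a near-duplicate (cos > 0.9)")
```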
arXiv Detail & Related papers (2021-10-18T22:50:07Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
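Frameworks of this kind typically score a unit against a concept by the IoU between the unit's thresholded activation map and the concept's segmentation mask; the sketch below shows that scoring on synthetic data (the threshold and array shapes are assumptions, not the paper's settings).

```python
# Dissection-style test (illustrative data): score a unit against a concept
# by IoU between its thresholded activation maps and the concept's masks.
import numpy as np

rng = np.random.default_rng(0)
unit_maps = rng.random(size=(50, 16, 16))             # one unit, 50 images
concept_masks = rng.random(size=(50, 16, 16)) > 0.7   # binary concept masks

thresh = np.quantile(unit_maps, 0.95)   # top-5% activations count as "firing"
fired = unit_maps > thresh

inter = np.logical_and(fired, concept_masks).sum()
union = np.logical_or(fired, concept_masks).sum()
print(f"unit/concept IoU: {inter / union:.3f}")  # high IoU => unit matches concept
```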
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Adaptive feature recombination and recalibration for semantic segmentation with Fully Convolutional Networks [57.64866581615309]
We propose a feature recombination block and a spatially adaptive recalibration block designed for semantic segmentation with Fully Convolutional Networks.
Results indicate that recombination and recalibration improve on a competitive baseline and generalize across three different problems.
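A minimal sketch of what a spatially adaptive recalibration block can look like, assuming a 1x1-conv-plus-sigmoid gate that acts per channel and per location (unlike the purely global SE-style gate shown earlier); the paper's actual blocks may differ.

```python
# Hedged sketch of spatially adaptive recalibration: the gate varies over
# both channels and spatial positions, which suits dense prediction.
import torch
import torch.nn as nn

class SpatialRecalibration(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)  # per-(channel, pixel) recalibration

x = torch.randn(2, 16, 32, 32)
print(SpatialRecalibration(16)(x).shape)  # torch.Size([2, 16, 32, 32])
```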
arXiv Detail & Related papers (2020-06-19T15:45:03Z)
- Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
Inspired by the structure of the human visual system, we propose a new framework called Ventral-Dorsal Networks (VDNets), which integrates a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z)
- See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks [184.4379622593225]
We introduce a novel network, called CO-attention Siamese Network (COSNet), to address the unsupervised video object segmentation task.
We emphasize the importance of inherent correlation among video frames and incorporate a global co-attention mechanism.
We propose a unified and end-to-end trainable framework where different co-attention variants can be derived for mining the rich context within videos.
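The core of a co-attention mechanism can be sketched as an affinity matrix relating every location of two frames' feature maps, used to re-express each frame in terms of the other. The sketch below follows that general recipe with synthetic shapes and an assumed learnable affinity weight; COSNet adds gating and a Siamese backbone around this core.

```python
# Minimal co-attention sketch between two frames' feature maps: an affinity
# matrix relates all locations of frame A to all locations of frame B, and
# each frame is then re-expressed using the other's features.
import torch
import torch.nn as nn

B, C, H, W = 1, 64, 14, 14
fa = torch.randn(B, C, H, W)           # features of frame A
fb = torch.randn(B, C, H, W)           # features of frame B
Wmat = nn.Parameter(torch.eye(C))      # learnable affinity weights (assumed)

a = fa.flatten(2)                      # (B, C, HW)
b = fb.flatten(2)
affinity = a.transpose(1, 2) @ Wmat @ b  # (B, HW_a, HW_b)

# Attend each frame to the other, then reshape back to feature maps.
fa_att = (b @ affinity.softmax(dim=2).transpose(1, 2)).view(B, C, H, W)
fb_att = (a @ affinity.softmax(dim=1)).view(B, C, H, W)
print(fa_att.shape, fb_att.shape)
```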
arXiv Detail & Related papers (2020-01-19T11:10:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.