Robust multimodal models have outlier features and encode more concepts
- URL: http://arxiv.org/abs/2310.13040v1
- Date: Thu, 19 Oct 2023 17:59:12 GMT
- Title: Robust multimodal models have outlier features and encode more concepts
- Authors: Jonathan Crabbé, Pau Rodríguez, Vaishaal Shankar, Luca Zappella, Arno Blaas
- Abstract summary: We probe the representation spaces of 12 robust multimodal models with various backbones and pretraining sets.
We find two signatures of robustness in the representation spaces of these models.
These insights pave the way for future research in various fields, such as model pruning and mechanistic interpretability.
- Score: 14.555055710021715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: What distinguishes robust models from non-robust ones? This question has
gained traction with the appearance of large-scale multimodal models, such as
CLIP. These models have demonstrated unprecedented robustness with respect to
natural distribution shifts. While it has been shown that such differences in
robustness can be traced back to differences in training data, so far it is not
known what that translates to in terms of what the model has learned. In this
work, we bridge this gap by probing the representation spaces of 12 robust
multimodal models with various backbones (ResNets and ViTs) and pretraining
sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp). We find two
signatures of robustness in the representation spaces of these models: (1)
Robust models exhibit outlier features characterized by their activations, with
some being several orders of magnitude above average. These outlier features
induce privileged directions in the model's representation space. We
demonstrate that these privileged directions explain most of the predictive
power of the model by pruning up to $80 \%$ of the least important
representation space directions without negative impacts on model accuracy and
robustness; (2) Robust models encode substantially more concepts in their
representation space. While this superposition of concepts allows robust models
to store much information, it also results in highly polysemantic features,
which makes their interpretation challenging. We discuss how these insights
pave the way for future research in various fields, such as model pruning and
mechanistic interpretability.
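To make the first signature concrete, here is a minimal, hypothetical sketch (not the authors' code) of how one might detect outlier feature dimensions and prune low-importance representation-space directions. The input is assumed to be an [N, D] matrix of image embeddings, e.g. produced offline by a CLIP image encoder; the function names, the `ratio` and `keep_frac` parameters, and the use of activation variance as the importance score are illustrative assumptions, and the paper's exact criteria may differ.

```python
import torch

# Hypothetical sketch: spot outlier feature dimensions and prune the
# least important representation-space directions. `features` is assumed
# to be an [N, D] matrix of image embeddings (e.g. CLIP image features).

def find_outlier_dims(features: torch.Tensor, ratio: float = 100.0) -> torch.Tensor:
    """Indices of dimensions whose mean |activation| is far above the average dimension."""
    mean_abs = features.abs().mean(dim=0)                      # [D]
    return torch.nonzero(mean_abs > ratio * mean_abs.mean()).squeeze(-1)

def prune_directions(features: torch.Tensor, keep_frac: float = 0.2) -> torch.Tensor:
    """Zero out all but the top `keep_frac` fraction of dimensions.

    Importance is scored here by activation variance across the dataset;
    the paper's exact importance criterion may differ (assumption).
    """
    importance = features.var(dim=0)                           # [D]
    k = max(1, int(keep_frac * features.shape[1]))
    keep = importance.topk(k).indices
    mask = torch.zeros(features.shape[1], dtype=features.dtype)
    mask[keep] = 1.0
    return features * mask                                     # pruned embeddings

# Toy usage with random stand-in features (real use: pass CLIP embeddings).
feats = torch.randn(1000, 512)
feats[:, 7] *= 1000.0                          # inject an artificial outlier dimension
print(find_outlier_dims(feats))                # -> tensor([7])
pruned = prune_directions(feats, keep_frac=0.2)  # keep ~20% of directions (prune ~80%)
```

In the paper's setting, one would then re-evaluate zero-shot accuracy and robustness on the pruned embeddings to verify that removing up to 80% of the directions leaves performance intact.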
Related papers
- FreeSeg-Diff: Training-Free Open-Vocabulary Segmentation with Diffusion Models [56.71672127740099]
We focus on the task of image segmentation, which is traditionally solved by training models on closed-vocabulary datasets.
We leverage different and relatively small-sized, open-source foundation models for zero-shot open-vocabulary segmentation.
Our approach (dubbed FreeSeg-Diff), which does not rely on any training, outperforms many training-based approaches on both Pascal VOC and COCO datasets.
arXiv Detail & Related papers (2024-03-29T10:38:25Z) - ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object [78.58860252442045]
We introduce generative models as a data source for hard images that benchmark deep models' robustness.
We are able to generate images with more diversified backgrounds, textures, and materials than any prior work; we term this benchmark ImageNet-D.
Our work suggests that diffusion models can be an effective source to test vision models.
arXiv Detail & Related papers (2024-03-27T17:23:39Z) - Raising the Bar of AI-generated Image Detection with CLIP [50.345365081177555]
The aim of this work is to explore the potential of pre-trained vision-language models (VLMs) for universal detection of AI-generated images.
We develop a lightweight detection strategy based on CLIP features and study its performance in a wide variety of challenging scenarios.
arXiv Detail & Related papers (2023-11-30T21:11:20Z) - ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy [27.75360812109922]
In this work, we conduct an in-depth comparative analysis of model behaviors beyond ImageNet accuracy.
Although our selected models have similar ImageNet accuracies and compute requirements, we find that they differ in many other aspects.
This diversity in model characteristics, not captured by traditional metrics, highlights the need for more nuanced analysis.
arXiv Detail & Related papers (2023-11-15T18:56:51Z) - With a Little Help from your own Past: Prototypical Memory Networks for
Image Captioning [47.96387857237473]
We devise a network which can perform attention over activations obtained while processing other training samples.
Our memory models the distribution of past keys and values through the definition of prototype vectors.
We demonstrate that our proposal can increase the performance of an encoder-decoder Transformer by 3.7 CIDEr points both when training in cross-entropy only and when fine-tuning with self-critical sequence training.
arXiv Detail & Related papers (2023-08-23T18:53:00Z) - ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing [45.14977000707886]
Higher accuracy on ImageNet usually leads to better robustness against different corruptions.
We create a toolkit for object editing with controls of backgrounds, sizes, positions, and directions.
We evaluate the performance of current deep learning models, including both convolutional neural networks and vision transformers.
arXiv Detail & Related papers (2023-03-30T02:02:32Z) - Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach that leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
arXiv Detail & Related papers (2022-06-07T02:03:06Z) - The Role of ImageNet Classes in Fréchet Inception Distance [33.47601032254247]
Fréchet Inception Distance (FID) is a metric for quantifying the distance between two distributions of images.
We observe that FID is essentially a distance between sets of ImageNet class probabilities.
Our results suggest caution against over-interpreting FID improvements, and underline the need for distribution metrics that are more perceptually uniform. (A minimal sketch of the standard FID computation appears after this list.)
arXiv Detail & Related papers (2022-03-11T15:50:06Z) - Vision Models Are More Robust And Fair When Pretrained On Uncurated
Images Without Supervision [38.22842778742829]
Discriminative self-supervised learning allows training models on any random group of internet images.
We train models on billions of random images without any data pre-processing or prior assumptions about what we want the model to learn.
We extensively study and validate our model performance on over 50 benchmarks, including fairness, robustness to distribution shift, geographical diversity, fine-grained recognition, image copy detection, and many image classification datasets.
arXiv Detail & Related papers (2022-02-16T22:26:47Z) - Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
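Referenced from the Fréchet Inception Distance entry above: a minimal sketch of the standard FID computation, included only to make the metric being discussed concrete. It assumes features (Inception-v3 activations in standard FID, or any fixed encoder) have already been extracted for the two image sets; the feature-extraction step is omitted.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two [N, D] feature matrices.

    FID = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})
    Standard FID uses Inception-v3 pool features; any fixed encoder works here.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)   # matrix square root
    covmean = covmean.real                                 # drop tiny imaginary parts
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

# Toy usage with random stand-in features (real FID: Inception-v3 activations).
a = np.random.randn(2000, 64)
b = np.random.randn(2000, 64) + 0.5
print(frechet_distance(a, b))
```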
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.