Simplicity in Complexity: Explaining Visual Complexity using Deep Segmentation Models
- URL: http://arxiv.org/abs/2403.03134v3
- Date: Mon, 6 May 2024 12:24:58 GMT
- Title: Simplicity in Complexity: Explaining Visual Complexity using Deep Segmentation Models
- Authors: Tingke Shen, Surabhi S Nath, Aenne Brielmann, Peter Dayan
- Abstract summary: We propose to model complexity using segment-based representations of images.
We find that complexity is well-explained by a simple linear model over these two features, the number of segments and the number of classes, across six diverse image sets.
- Score: 6.324765782436764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The complexity of visual stimuli plays an important role in many cognitive phenomena, including attention, engagement, memorability, time perception and aesthetic evaluation. Despite its importance, complexity is poorly understood and ironically, previous models of image complexity have been quite complex. There have been many attempts to find handcrafted features that explain complexity, but these features are usually dataset specific, and hence fail to generalise. On the other hand, more recent work has employed deep neural networks to predict complexity, but these models remain difficult to interpret, and do not guide a theoretical understanding of the problem. Here we propose to model complexity using segment-based representations of images. We use state-of-the-art segmentation models, SAM and FC-CLIP, to quantify the number of segments at multiple granularities, and the number of classes in an image respectively. We find that complexity is well-explained by a simple linear model with these two features across six diverse image-sets of naturalistic scene and art images. This suggests that the complexity of images can be surprisingly simple.
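The two-feature linear model described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's code: the feature values and ratings below are made-up placeholders standing in for the segment counts (from SAM), class counts (from FC-CLIP), and human complexity ratings that the paper actually uses.

```python
import numpy as np

# Hypothetical precomputed features for a handful of images:
# column 0: number of segments (as SAM would provide),
# column 1: number of semantic classes (as FC-CLIP would provide).
# These values are illustrative, not taken from the paper's datasets.
X = np.array([
    [12,  3],
    [45,  9],
    [80, 15],
    [23,  5],
    [60, 11],
], dtype=float)

# Hypothetical human complexity ratings on a 0-100 scale.
y = np.array([20.0, 55.0, 90.0, 35.0, 70.0])

# Fit the simple linear model: complexity ~ w1*segments + w2*classes + b.
A = np.hstack([X, np.ones((len(X), 1))])  # append an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w

# Goodness of fit (R^2) on these toy points.
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"weights + intercept: {w}, R^2: {r2:.3f}")
```

The point of the sketch is the model class, not the numbers: once the two segment-based features are extracted, the predictor is an ordinary least-squares fit with three free parameters.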
Related papers
- Multi-scale structural complexity as a quantitative measure of visual complexity [1.3499500088995464]
We suggest adopting the multi-scale structural complexity (MSSC) measure, an approach that defines the structural complexity of an object as the amount of dissimilarity between distinct scales in its hierarchical organization.
We demonstrate that MSSC correlates with subjective complexity on par with other computational complexity measures, while being more intuitive by definition, consistent across categories of images, and easier to compute.
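The core idea, dissimilarity between consecutive scales of a coarse-graining hierarchy, can be sketched in a few lines. This is a simplified stand-in, assuming 2x2 block averaging as the coarse-graining and mean squared error as the dissimilarity; the published MSSC definition is based on overlaps between coarse-grained patterns, not plain MSE.

```python
import numpy as np

def multiscale_dissimilarity(img, n_scales=4):
    """Toy scale-by-scale complexity score: sum of MSE dissimilarities
    between each scale and its 2x2 block-averaged coarse-graining.
    Assumes a square grayscale array whose sides are divisible by 2**n_scales."""
    total = 0.0
    current = img.astype(float)
    for _ in range(n_scales):
        h, w = current.shape
        # Coarse-grain by averaging non-overlapping 2x2 blocks.
        coarse = current.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        # Upsample back to the finer grid and compare with the finer scale.
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        total += np.mean((current - up) ** 2)
        current = coarse
    return total

rng = np.random.default_rng(0)
noise = rng.random((32, 32))       # structure at the finest scale
flat = np.full((32, 32), 0.5)      # identical at every scale
print(multiscale_dissimilarity(noise), multiscale_dissimilarity(flat))
```

A constant image is identical to all of its coarse-grainings and scores zero, while an image with fine-scale structure accumulates dissimilarity across scales, which matches the intuition the abstract describes.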
arXiv Detail & Related papers (2024-08-07T20:26:35Z)
- Understanding Visual Feature Reliance through the Lens of Complexity [14.282243225622093]
We introduce a new metric for quantifying feature complexity, based on $\mathscr{V}$-information.
We analyze the complexities of 10,000 features, represented as directions in the penultimate layer, that were extracted from a standard ImageNet-trained vision model.
arXiv Detail & Related papers (2024-07-08T16:21:53Z)
- What makes Models Compositional? A Theoretical View: With Supplement [60.284698521569936]
We propose a general neuro-symbolic definition of compositional functions and their compositional complexity.
We show how various existing general and special purpose sequence processing models fit this definition and use it to analyze their compositional complexity.
arXiv Detail & Related papers (2024-05-02T20:10:27Z)
- Inferring Local Structure from Pairwise Correlations [0.0]
We show that pairwise correlations provide enough information to recover local relations.
This proves to be successful even though higher order interaction structures are present in our data.
arXiv Detail & Related papers (2023-05-07T22:38:29Z)
- The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning [80.1018596899899]
We argue that neural network models share this same preference, formalized using Kolmogorov complexity.
Our experiments show that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences.
These observations justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
arXiv Detail & Related papers (2023-04-11T17:22:22Z)
- On the Complexity of Bayesian Generalization [141.21610899086392]
We consider concept generalization at a large scale in the diverse and natural visual spectrum.
We study two modes when the problem space scales up, and the complexity of concepts becomes diverse.
arXiv Detail & Related papers (2022-11-20T17:21:37Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Model Complexity of Deep Learning: A Survey [79.20117679251766]
We conduct a systematic overview of the latest studies on model complexity in deep learning.
We review the existing studies on those two categories along four important factors, including model framework, model size, optimization process and data complexity.
arXiv Detail & Related papers (2021-03-08T22:39:32Z)
- Simplicial Complex Representation Learning [0.7734726150561088]
Simplicial complexes form an important class of topological spaces that are frequently used in computer-aided design, computer graphics, and simulation.
In this work, we propose a method for simplicial complex-level representation learning that embeds a simplicial complex to a universal embedding space.
Our method utilizes a simplex-level embedding induced by a pre-trained simplicial autoencoder to learn an entire simplicial complex representation.
arXiv Detail & Related papers (2021-03-06T06:33:04Z)
- Structural Landmarking and Interaction Modelling: on Resolution Dilemmas in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose "SLIM", an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.