Texture Interpolation for Probing Visual Perception
- URL: http://arxiv.org/abs/2006.03698v2
- Date: Thu, 22 Oct 2020 18:05:27 GMT
- Title: Texture Interpolation for Probing Visual Perception
- Authors: Jonathan Vacher, Aida Davila, Adam Kohn, Ruben Coen-Cagli
- Abstract summary: We show that distributions of deep convolutional neural network (CNN) activations of a texture are well described by elliptical distributions.
We then propose the natural geodesics arising with the optimal transport metric to interpolate between arbitrary textures.
Compared to other CNN-based approaches, our method appears to match more closely the geometry of texture perception.
- Score: 4.637185817866918
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Texture synthesis models are important tools for understanding visual
processing. In particular, statistical approaches based on neurally relevant
features have been instrumental in understanding aspects of visual perception
and of neural coding. New deep learning-based approaches further improve the
quality of synthetic textures. Yet, it is still unclear why deep texture
synthesis performs so well, and applications of this new framework to probe
visual perception are scarce. Here, we show that distributions of deep
convolutional neural network (CNN) activations of a texture are well described
by elliptical distributions and therefore, following optimal transport theory,
constraining their mean and covariance is sufficient to generate new texture
samples. Then, we propose the natural geodesics (i.e., the shortest path between
two points) arising with the optimal transport metric to interpolate between
arbitrary textures. Compared to other CNN-based approaches, our interpolation
method appears to match more closely the geometry of texture perception, and
our mathematical framework is better suited to study its statistical nature. We
apply our method by measuring the perceptual scale associated with the
interpolation parameter in human observers, and the neural sensitivity of
different areas of visual cortex in macaque monkeys.
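For Gaussian (and more generally elliptical) distributions, the Wasserstein-2 geodesic has a closed form: means interpolate linearly, while covariances follow the McCann displacement interpolation. The sketch below is a minimal NumPy/SciPy illustration of that closed form, not the authors' implementation; the feature arrays and all names are hypothetical stand-ins for CNN activations.
```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_geodesic(mu0, cov0, mu1, cov1, t):
    """Return (mu_t, cov_t) at time t on the Wasserstein-2 geodesic
    between the Gaussians N(mu0, cov0) and N(mu1, cov1)."""
    # Optimal transport map between two Gaussians:
    # T = cov0^{-1/2} (cov0^{1/2} cov1 cov0^{1/2})^{1/2} cov0^{-1/2}
    c0_sqrt = np.real(sqrtm(cov0))
    c0_sqrt_inv = np.linalg.inv(c0_sqrt)
    T = c0_sqrt_inv @ np.real(sqrtm(c0_sqrt @ cov1 @ c0_sqrt)) @ c0_sqrt_inv
    # McCann displacement interpolation: x -> ((1 - t) I + t T) x
    mu_t = (1 - t) * mu0 + t * mu1
    A = (1 - t) * np.eye(len(mu0)) + t * T
    return mu_t, A @ cov0 @ A

# Toy demo with random vectors standing in for CNN activations.
rng = np.random.default_rng(0)
d = 8
feats0 = rng.standard_normal((1000, d))                                   # "texture A"
feats1 = rng.standard_normal((1000, d)) * np.linspace(0.5, 2.0, d) + 3.0  # "texture B"
mu0, cov0 = feats0.mean(axis=0), np.cov(feats0, rowvar=False)
mu1, cov1 = feats1.mean(axis=0), np.cov(feats1, rowvar=False)
mu_t, cov_t = gaussian_w2_geodesic(mu0, cov0, mu1, cov1, t=0.5)
# Samples whose mean/covariance match the halfway texture statistics.
interp_feats = rng.multivariate_normal(mu_t, cov_t, size=1000)
```
In the paper's setting, the interpolated mean and covariance at each CNN layer would then serve as the synthesis constraints for generating the intermediate texture.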
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experiments achieve state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-28T10:51:05Z) - Neural Texture Puppeteer: A Framework for Neural Geometry and Texture Rendering of Articulated Shapes, Enabling Re-Identification at Interactive Speed [2.8544822698499255]
We present a neural rendering pipeline for textured articulated shapes that we call Neural Texture Puppeteer.
A texture auto-encoder uses the geometry encoding to map textured images into a global latent code.
Our method can be applied to endangered species where data is limited.
arXiv Detail & Related papers (2023-11-28T10:51:05Z) - Neural Textured Deformable Meshes for Robust Analysis-by-Synthesis [17.920305227880245]
Our paper formulates three vision tasks in a consistent manner using approximate analysis-by-synthesis.
We show that our analysis-by-synthesis is much more robust than conventional neural networks when evaluated on real-world images.
arXiv Detail & Related papers (2023-05-31T18:45:02Z) - GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce Generalizable Model-based Neural Radiance Fields, an effective framework for synthesizing free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z) - FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z) - Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z) - NITES: A Non-Parametric Interpretable Texture Synthesis Method [41.13585191073405]
A non-parametric interpretable texture synthesis method, called the NITES method, is proposed in this work.
NITES is mathematically transparent and efficient in training and inference.
arXiv Detail & Related papers (2020-09-02T22:52:44Z) - A Generative Model for Texture Synthesis based on Optimal Transport between Feature Distributions [8.102785819558978]
We show how to use our framework to learn a feed-forward neural network that can synthesize on-the-fly new textures of arbitrary size in a very fast manner.
arXiv Detail & Related papers (2020-06-19T13:32:55Z) - Co-occurrence Based Texture Synthesis [25.4878061402506]
We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images.
We show that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis.
arXiv Detail & Related papers (2020-05-17T08:01:44Z) - Towards Analysis-friendly Face Representation with Scalable Feature and Texture Compression [113.30411004622508]
We show that a universal and collaborative visual information representation can be achieved in a hierarchical way.
Based on the strong generative capability of deep neural networks, the gap between the base feature layer and the enhancement layer is further closed by feature-level texture reconstruction.
To improve the efficiency of the proposed framework, the base layer neural network is trained in a multi-task manner.
arXiv Detail & Related papers (2020-04-21T14:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.