Object-based Probabilistic Similarity Evidence of Sparse Latent Features
from Fully Convolutional Networks
- URL: http://arxiv.org/abs/2307.13606v1
- Date: Tue, 25 Jul 2023 16:15:29 GMT
- Authors: Cyril Juliani
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Similarity analysis using neural networks has emerged as a powerful technique
for understanding and categorizing complex patterns in various domains. By
leveraging the latent representations learned by neural networks, data objects
such as images can be compared effectively. This research explores the
utilization of latent information generated by fully convolutional networks
(FCNs) in similarity analysis, notably to estimate the visual resemblance of
objects segmented in 2D pictures. To do this, the analytical scheme comprises
two steps: (1) extracting and transforming feature patterns per 2D object from
a trained FCN, and (2) identifying the most similar patterns through fuzzy
inference. Step (2) can be further enhanced by incorporating a weighting
scheme that considers the significance of latent variables in the analysis. The
results provide valuable insights into the benefits and challenges of employing
neural network-based similarity analysis for discerning data patterns
effectively.
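The two-step scheme described in the abstract can be sketched in code, assuming per-object latent vectors are obtained by pooling an FCN's feature maps over each object's segmentation mask. The Gaussian membership function and the variance-based weighting below are illustrative choices standing in for the paper's fuzzy-inference and significance-weighting details, which are not specified here.

```python
import numpy as np

def pool_object_features(feature_map, mask):
    """Step (1): average the FCN feature map over an object's segmentation mask.
    feature_map: (H, W, C) activations; mask: (H, W) boolean object mask."""
    return feature_map[mask].mean(axis=0)  # one (C,) latent vector per object

def fuzzy_similarity(a, b, weights=None, scale=1.0):
    """Step (2): per-feature Gaussian membership of the difference (a - b),
    aggregated by an (optionally weighted) mean -- a fuzzy score in [0, 1]."""
    membership = np.exp(-((a - b) ** 2) / (2.0 * scale ** 2))
    if weights is None:
        return float(membership.mean())
    w = weights / weights.sum()          # normalize significance weights
    return float(np.dot(w, membership))

def rank_most_similar(query, candidates, weights=None):
    """Return candidate indices sorted from most to least similar to the query."""
    scores = [fuzzy_similarity(query, c, weights) for c in candidates]
    return np.argsort(scores)[::-1], scores

# Illustrative weighting: emphasize latent variables that vary across objects.
rng = np.random.default_rng(0)
objects = rng.normal(size=(5, 16))       # 5 objects, 16-dim latent vectors
weights = objects.var(axis=0)            # per-variable significance (assumed)
order, scores = rank_most_similar(objects[0], objects[1:], weights)
```

A weighted mean is only one possible fuzzy aggregation; min- or product-based t-norms would be equally valid substitutes in step (2).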
Related papers
- Experimental Observations of the Topology of Convolutional Neural
Network Activations [2.4235626091331737]
Topological data analysis provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification.
arXiv Detail & Related papers (2022-12-01T02:05:44Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- VisGraphNet: a complex network interpretation of convolutional neural features [6.50413414010073]
We propose and investigate the use of visibility graphs to model the feature map of a neural network.
The work is motivated by an alternative viewpoint provided by these graphs over the original data.
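The exact feature-map-to-graph mapping used by VisGraphNet is not given here; a common construction for turning a 1-D profile (e.g. a row of a feature map) into a graph is the natural visibility graph, sketched below as an illustrative assumption rather than the paper's method.

```python
def visibility_graph(series):
    """Natural visibility graph of a 1-D sequence: nodes are samples (t, y_t);
    an edge links samples a and b iff every intermediate sample lies strictly
    below the straight line joining (a, y_a) and (b, y_b)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            # Adjacent samples (empty intermediate range) are always connected.
            visible = all(
                series[c] < ya + (yb - ya) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges
```

The O(n^2) double loop is fine for short profiles; divide-and-conquer variants exist for long sequences.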
arXiv Detail & Related papers (2021-08-27T20:21:04Z)
- SI-Score: An image dataset for fine-grained analysis of robustness to object location, rotation and size [95.00667357120442]
Changing the object location, rotation and size may affect the predictions in non-trivial ways.
We perform a fine-grained analysis of robustness with respect to these factors of variation using SI-Score, a synthetic dataset.
arXiv Detail & Related papers (2021-04-09T05:00:49Z)
- SOSD-Net: Joint Semantic Object Segmentation and Depth Estimation from Monocular images [94.36401543589523]
We introduce the concept of semantic objectness to exploit the geometric relationship of these two tasks.
We then propose a Semantic Object and Depth Estimation Network (SOSD-Net) based on the objectness assumption.
To the best of our knowledge, SOSD-Net is the first network that exploits the geometry constraint for simultaneous monocular depth estimation and semantic segmentation.
arXiv Detail & Related papers (2021-01-19T02:41:03Z)
- Probabilistic Graph Attention Network with Conditional Kernels for Pixel-Wise Prediction [158.88345945211185]
We present a novel approach that advances the state of the art on pixel-level prediction in a fundamental aspect, i.e., structured multi-scale feature learning and fusion.
We propose a probabilistic graph attention network structure based on a novel Attention-Gated Conditional Random Fields (AG-CRFs) model for learning and fusing multi-scale representations in a principled manner.
arXiv Detail & Related papers (2021-01-08T04:14:29Z)
- Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics [93.4221402881609]
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
Inspired by both LS and ID strategies for quantitative information structure analysis, we introduce two novel complementary methods for inter-layer information similarity assessment.
We demonstrate their efficacy in this study by performing analysis on a deep convolutional neural network architecture on image data.
arXiv Detail & Related papers (2020-12-07T15:34:58Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
- Complexity for deep neural networks and other characteristics of deep feature representations [0.0]
We define a notion of complexity, which quantifies the nonlinearity of the computation of a neural network.
We investigate these observables both for trained networks as well as explore their dynamics during training.
arXiv Detail & Related papers (2020-06-08T17:59:30Z)
- Network Comparison with Interpretable Contrastive Network Representation Learning [44.145644586950574]
We introduce a new analysis approach called contrastive network representation learning (cNRL).
cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another.
We demonstrate the effectiveness of i-cNRL for network comparison with multiple network models and real-world datasets.
arXiv Detail & Related papers (2020-05-25T21:46:59Z)
- Similarity of Neural Networks with Gradients [8.804507286438781]
We propose to leverage both feature vectors and gradients in designing the representation of a neural network.
We show that the proposed approach provides a state-of-the-art method for computing similarity of neural networks.
arXiv Detail & Related papers (2020-03-25T17:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.