A Multi-Resolution Benchmark Framework for Spatial Reasoning Assessment in Neural Networks
- URL: http://arxiv.org/abs/2508.12741v1
- Date: Mon, 18 Aug 2025 09:04:13 GMT
- Title: A Multi-Resolution Benchmark Framework for Spatial Reasoning Assessment in Neural Networks
- Authors: Manuela Imbriani, Gina Belmonte, Mieke Massink, Alessandro Tofani, Vincenzo Ciancia
- Abstract summary: This paper presents preliminary results in the definition of a comprehensive benchmark framework designed to evaluate spatial reasoning capabilities in neural networks. The framework is currently being used to study the capabilities of nnU-Net.
- Score: 40.69150418258213
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents preliminary results in the definition of a comprehensive benchmark framework designed to systematically evaluate spatial reasoning capabilities in neural networks, with a particular focus on morphological properties such as connectivity and distance relationships. The framework is currently being used to study the capabilities of nnU-Net, exploiting the spatial model checker VoxLogicA to generate two distinct categories of synthetic datasets: maze connectivity problems for topological analysis and spatial distance computation tasks for geometric understanding. Each category is evaluated across multiple resolutions to assess scalability and generalization properties. The automated pipeline encompasses a complete machine learning workflow including: synthetic dataset generation, standardized training with cross-validation, inference execution, and comprehensive evaluation using Dice coefficient and IoU (Intersection over Union) metrics. Preliminary experimental results demonstrate significant challenges in neural network spatial reasoning capabilities, revealing systematic failures in basic geometric and topological understanding tasks. The framework provides a reproducible experimental protocol, enabling researchers to identify specific limitations. Such limitations could be addressed through hybrid approaches combining neural networks with symbolic reasoning methods for improved spatial understanding in clinical applications, establishing a foundation for ongoing research into neural network spatial reasoning limitations and potential solutions.
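The abstract names the Dice coefficient and IoU (Intersection over Union) as the framework's evaluation metrics. A minimal NumPy sketch of these two segmentation metrics is given below; the function names and the `eps` smoothing term are our own illustrative choices, not the framework's actual API:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Example: predicted mask overlaps the target in 1 of 2 predicted pixels.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(dice_coefficient(pred, target))  # ≈ 0.667
print(iou(pred, target))               # ≈ 0.5
```

Dice weights the intersection twice relative to the mask sizes, so it is always at least as large as IoU for the same prediction; reporting both, as the paper does, makes results comparable across segmentation benchmarks that standardize on either metric.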
Related papers
- Statistical tuning of artificial neural network [0.0]
This study introduces methods to enhance the understanding of neural networks, focusing specifically on models with a single hidden layer.
We propose statistical tests to assess the significance of input neurons and introduce algorithms for dimensionality reduction.
This research advances the field of Explainable Artificial Intelligence by presenting robust statistical frameworks for interpreting neural networks.
arXiv Detail & Related papers (2024-09-24T19:47:03Z) - Steinmetz Neural Networks for Complex-Valued Data [23.80312814400945]
We introduce a new approach to processing complex-valued data using DNNs consisting of parallel real-valued networks with coupled outputs. Our proposed class of architectures, referred to as Steinmetz Neural Networks, incorporates multi-view learning to construct more interpretable representations in the latent space. Our numerical experiments demonstrate the improved performance and robustness to additive noise afforded by our proposed networks on benchmark datasets and synthetic examples.
arXiv Detail & Related papers (2024-09-16T08:26:06Z) - Nonlinear classification of neural manifolds with contextual information [6.292933471495322]
We introduce a theoretical framework that leverages latent directions in input space, which can be related to contextual information. We derive an exact formula for the context-dependent manifold capacity that depends on manifold geometry and context correlations. Our framework's increased expressivity captures representation reformatting in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis.
arXiv Detail & Related papers (2024-05-10T23:37:31Z) - Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
arXiv Detail & Related papers (2022-12-28T18:03:41Z) - Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of neural networks' feature spaces may jointly serve as discriminants of network performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z) - Generalized Shape Metrics on Neural Representations [26.78835065137714]
We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
arXiv Detail & Related papers (2021-10-27T19:48:55Z) - Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics [93.4221402881609]
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
Inspired by both LS and ID strategies for quantitative information structure analysis, we introduce two novel complementary methods for inter-layer information similarity assessment.
We demonstrate their efficacy in this study by performing analysis on a deep convolutional neural network architecture on image data.
arXiv Detail & Related papers (2020-12-07T15:34:58Z) - Towards Interaction Detection Using Topological Analysis on Neural Networks [55.74562391439507]
In neural networks, any interacting features must follow a strongly weighted connection to common hidden units.
We propose a new measure for quantifying interaction strength, based upon the well-received theory of persistent homology.
A Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions.
arXiv Detail & Related papers (2020-10-25T02:15:24Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.