Investigating the Gestalt Principle of Closure in Deep Convolutional Neural Networks
- URL: http://arxiv.org/abs/2411.00627v1
- Date: Fri, 01 Nov 2024 14:36:21 GMT
- Title: Investigating the Gestalt Principle of Closure in Deep Convolutional Neural Networks
- Authors: Yuyan Zhang, Derya Soydaner, Fatemeh Behrad, Lisa Koßmann, Johan Wagemans
- Abstract summary: This study investigates the principle of closure in convolutional neural networks.
We conduct experiments using simple visual stimuli with progressively removed edge sections.
We evaluate well-known networks on their ability to classify incomplete polygons.
- Score: 4.406699323036466
- Abstract: Deep neural networks perform well in object recognition, but do they perceive objects like humans? This study investigates the Gestalt principle of closure in convolutional neural networks. We propose a protocol to identify closure and conduct experiments using simple visual stimuli with progressively removed edge sections. We evaluate well-known networks on their ability to classify incomplete polygons. Our findings reveal a performance degradation as the edge removal percentage increases, indicating that current models heavily rely on complete edge information for accurate classification. The data used in our study is available on GitHub.
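A minimal sketch of the kind of stimulus generation this protocol describes, assuming regular-polygon outlines rendered with Pillow. The vertex count, image size, gap placement, and sub-segment count below are illustrative choices, not the paper's exact settings:

```python
# Sketch of incomplete-polygon stimuli: outlines with a percentage of each
# edge erased. Gap placement (middle of each edge) is an assumption, not
# necessarily the scheme used in the paper.
import numpy as np
from PIL import Image, ImageDraw

def polygon_vertices(n_sides, center, radius=80.0, rotation=0.0):
    """Vertices of a regular n-gon around `center`."""
    angles = rotation + 2 * np.pi * np.arange(n_sides) / n_sides
    cx, cy = center
    return [(cx + radius * np.cos(a), cy + radius * np.sin(a)) for a in angles]

def incomplete_polygon(n_sides, removal_pct, size=224):
    """Render an n-gon outline with `removal_pct` percent of each edge removed.

    Each edge is split into short sub-segments; a contiguous run is dropped
    from the middle of every edge, so the global shape stays recoverable
    while local edge information is progressively removed.
    """
    img = Image.new("L", (size, size), color=255)
    draw = ImageDraw.Draw(img)
    verts = polygon_vertices(n_sides, center=(size / 2, size / 2))
    n_sub = 100                                   # sub-segments per edge
    n_keep = int(n_sub * (1 - removal_pct / 100))
    for i in range(n_sides):
        p = np.array(verts[i])
        q = np.array(verts[(i + 1) % n_sides])
        ts = np.linspace(0.0, 1.0, n_sub + 1)
        # keep the sub-segments nearest the two vertices, erase the middle
        keep = list(range(n_keep // 2)) + list(range(n_sub - n_keep // 2, n_sub))
        for k in keep:
            a = p + ts[k] * (q - p)
            b = p + ts[k + 1] * (q - p)
            draw.line([tuple(a), tuple(b)], fill=0, width=2)
    return img

# Progressively harder stimuli: 0% (complete) up to 90% of each edge removed.
for pct in range(0, 100, 10):
    incomplete_polygon(n_sides=5, removal_pct=pct).save(f"pentagon_{pct:02d}.png")
```

Evaluation would then feed each rendered stimulus through the networks under test (e.g., classifiers trained to predict the number of sides) and track classification accuracy as a function of the removal percentage.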
Related papers
- Residual Random Neural Networks [0.0]
Single-layer feedforward neural networks with random weights are a recurring motif in the neural networks literature.
We show that one can obtain good classification results even if the number of hidden neurons has the same order of magnitude as the dimensionality of the data samples.
arXiv Detail & Related papers (2024-10-25T22:00:11Z)
- Finding Closure: A Closer Look at the Gestalt Law of Closure in Convolutional Neural Networks [4.406699323036466]
Closure is the ability to fill in gaps to perceive figures as complete wholes.
Recent studies have examined the Closure effect in neural networks.
We introduce well-curated datasets to test for Closure effects, including both modal and amodal completion.
Our comprehensive analysis reveals that VGG16 and DenseNet-121 exhibit the Closure effect, while other CNNs show variable results.
arXiv Detail & Related papers (2024-08-22T14:59:37Z)
- Theoretical Characterization of How Neural Network Pruning Affects its Generalization [131.1347309639727]
This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization.
It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero.
More surprisingly, the generalization bound gets better as the pruning fraction gets larger.
arXiv Detail & Related papers (2023-01-01T03:10:45Z)
- Affordance detection with Dynamic-Tree Capsule Networks [5.847547503155588]
Affordance detection from visual input is a fundamental step in autonomous robotic manipulation.
We introduce the first affordance detection network based on dynamic tree-structured capsules for sparse 3D point clouds.
Our algorithm outperforms current affordance detection methods when grasping previously unseen objects.
arXiv Detail & Related papers (2022-11-09T21:14:08Z)
- SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Redundant representations help generalization in wide neural networks [71.38860635025907]
We study the last hidden layer representations of various state-of-the-art convolutional neural networks.
We find that if the last hidden representation is wide enough, its neurons tend to split into groups that carry identical information, and differ from each other only by statistically independent noise.
arXiv Detail & Related papers (2021-06-07T10:18:54Z)
- Leveraging Sparse Linear Layers for Debuggable Deep Networks [86.94586860037049]
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
The resulting sparse explanations can help to identify spurious correlations, explain misclassifications, and diagnose model biases in vision and language tasks.
arXiv Detail & Related papers (2021-05-11T08:15:25Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Revisiting Edge Detection in Convolutional Neural Networks [3.5281112495479245]
We show that edges cannot be represented properly in the first convolutional layer of a neural network.
We propose edge-detection units and show that they reduce performance loss and generate qualitatively different representations.
arXiv Detail & Related papers (2020-12-25T13:53:04Z)
- A Deep Conditioning Treatment of Neural Networks [37.192369308257504]
We show that depth improves trainability of neural networks by improving the conditioning of certain kernel matrices of the input data.
We provide versions of the result that hold for training just the top layer of the neural network, as well as for training all layers via the neural tangent kernel.
arXiv Detail & Related papers (2020-02-04T20:21:36Z)