On The Potential of The Fractal Geometry and The CNNs Ability to Encode it
- URL: http://arxiv.org/abs/2401.04141v1
- Date: Sun, 7 Jan 2024 15:22:56 GMT
- Title: On The Potential of The Fractal Geometry and The CNNs Ability to Encode it
- Authors: Julia El Zini, Bassel Musharrafieh and Mariette Awad
- Abstract summary: The fractal dimension provides a statistical index of object complexity.
Although useful in several classification tasks, the fractal dimension is under-explored in deep learning applications.
We show that training a shallow network on fractal features achieves performance comparable to that of deep networks trained on raw data.
- Score: 1.7311053765541484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fractal dimension provides a statistical index of object complexity by
studying how the pattern changes with the measuring scale. Although useful in
several classification tasks, the fractal dimension is under-explored in deep
learning applications. In this work, we investigate the features that are
learned by deep models and we study whether these deep networks are able to
encode features as complex and high-level as the fractal dimensions.
Specifically, we conduct a correlation analysis experiment to show that deep
networks are unable to extract such a feature in any of their layers. We
combine our analytical study with a human evaluation to investigate the
differences between deep learning networks and models that operate solely on the
fractal feature. Moreover, we show the effectiveness of fractal features
in applications where the object structure is crucial for the classification
task. We empirically show that training a shallow network on fractal features
achieves performance comparable, and in specific cases even superior, to that of
deep networks trained on raw data, while requiring fewer computational resources.
Fractal features improved classification accuracy by 30% on average while
requiring up to 84% less time to train. We couple our empirical study with a
complexity analysis of the computational cost of extracting the proposed
fractal features, and we study their limitations.
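As a concrete illustration of the fractal dimension the abstract refers to, the sketch below estimates the standard Minkowski-Bouligand (box-counting) dimension of a binarized image with NumPy. This is only a minimal sketch of the general technique; the paper's exact feature-extraction pipeline, preprocessing, and choice of scales may differ.

```python
# Minimal sketch (not the authors' code): box-counting fractal dimension
# of a 2-D binary image. Scale choices here are illustrative assumptions.
import numpy as np

def box_counting_dimension(binary_img: np.ndarray) -> float:
    """Count occupied boxes N(s) at several box sizes s and fit
    log N(s) against log(1/s); the slope estimates the fractal dimension."""
    assert binary_img.ndim == 2 and binary_img.any()
    side = min(binary_img.shape)
    assert side >= 8, "image too small for a meaningful fit"
    # Box sizes: powers of two from 2 up to half the smaller image side.
    sizes = 2 ** np.arange(1, int(np.log2(side)))
    counts = []
    for s in sizes:
        # Crop so the image tiles exactly into s x s boxes, then count the
        # boxes containing at least one foreground pixel.
        h = (binary_img.shape[0] // s) * s
        w = (binary_img.shape[1] // s) * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        # max(..., 1) guards against a crop that lost all foreground.
        counts.append(max(blocks.max(axis=(1, 3)).sum(), 1))
    # Slope of log N(s) versus log(1/s) is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return float(slope)

if __name__ == "__main__":
    # Sanity check: a filled square has box-counting dimension close to 2.
    img = np.ones((256, 256), dtype=np.uint8)
    print(round(box_counting_dimension(img), 2))  # ~2.0
```

A shallow classifier (for example, a small MLP or logistic regression) could then be trained on such per-image fractal descriptors, which is the kind of lightweight setup the abstract compares against deep networks trained on raw pixels.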
Related papers
- Representing Topological Self-Similarity Using Fractal Feature Maps for Accurate Segmentation of Tubular Structures [12.038095281876071]
In this study, we incorporate fractal features into a deep learning model by extending the fractal dimension (FD) to the pixel level using a sliding-window technique (a rough sketch of this sliding-window idea appears after the related-papers list).
The resulting fractal feature maps (FFMs) are then incorporated as an additional input to the model and as an additional weight in the loss function.
Experiments on five tubular structure datasets validate the effectiveness and robustness of our approach.
arXiv Detail & Related papers (2024-07-20T05:22:59Z)
- Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination [33.273226655730326]
We show that each layer of a deep linear network progressively compresses within-class features at a geometric rate and discriminates between-class features at a linear rate.
This is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks.
arXiv Detail & Related papers (2023-11-06T09:00:38Z)
- Memorization with neural nets: going beyond the worst case [5.662924503089369]
In practice, deep neural networks are often able to easily interpolate their training data.
For real-world data, however, one intuitively expects a benign structure, so that interpolation already occurs at a smaller network size than the memorization capacity suggests.
We introduce a simple randomized algorithm that, given a fixed finite dataset with two classes, constructs an interpolating three-layer neural network in polynomial time with high probability.
arXiv Detail & Related papers (2023-09-30T10:06:05Z)
- How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model [47.617093812158366]
We introduce the Random Hierarchy Model: a family of synthetic tasks inspired by the hierarchical structure of language and images.
We find that deep networks learn the task by developing internal representations invariant to exchanging equivalent groups.
Our results indicate how deep networks overcome the curse of dimensionality by building invariant representations.
arXiv Detail & Related papers (2023-07-05T09:11:09Z)
- Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Data-driven effective model shows a liquid-like deep learning [2.0711789781518752]
It remains unknown what the landscape looks like for deep networks of binary synapses.
We propose a statistical mechanics framework by directly building a least structured model of the high-dimensional weight space.
Our data-driven model thus provides a statistical mechanics insight about why deep learning is unreasonably effective in terms of the high-dimensional weight space.
arXiv Detail & Related papers (2020-07-16T04:02:48Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones to learn features that perform well under both of the aforementioned classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing a neural network to learn both its size and topology during gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
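For the fractal feature maps described in the first related paper above, a very rough sliding-window sketch follows. It is not the FFM authors' implementation: the window size, the stride, and the reuse of the box_counting_dimension helper from the earlier sketch (which must be in scope) are all illustrative assumptions.

```python
# Rough sketch only (not the FFM paper's code): extending the fractal
# dimension to the pixel level with a sliding window. Assumes the
# box_counting_dimension helper from the earlier sketch is defined.
import numpy as np

def fractal_feature_map(binary_img: np.ndarray,
                        window: int = 32,
                        stride: int = 8) -> np.ndarray:
    """Slide a window over a binary image and store each window's
    box-counting dimension in a coarse feature map."""
    rows = (binary_img.shape[0] - window) // stride + 1
    cols = (binary_img.shape[1] - window) // stride + 1
    ffm = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            patch = binary_img[i * stride:i * stride + window,
                               j * stride:j * stride + window]
            # Skip empty patches: there is no foreground to measure.
            if patch.any():
                ffm[i, j] = box_counting_dimension(patch)
    return ffm
```

Such a map could then be upsampled to the input resolution and concatenated with the image as an extra channel, which is one plausible reading of "additional input to the model" in that summary.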