Locally orderless networks
- URL: http://arxiv.org/abs/2406.13514v1
- Date: Wed, 19 Jun 2024 13:01:09 GMT
- Title: Locally orderless networks
- Authors: Jon Sporring, Peidi Xu, Jiahao Lu, François Lauze, Sune Darkner,
- Abstract summary: We present Locally Orderless Networks (LON) and their theoretical foundation, which links them to Convolutional Neural Networks (CNNs).
We compare LON, CNN, and scale-space histograms on single-layer networks.
We show how LON and CNN can emulate each other, and how LON expands the set of computable functionals to non-linear functions such as squaring.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Locally Orderless Networks (LON) and their theoretical foundation, which links them to Convolutional Neural Networks (CNNs), to scale-space histograms, and to measurement theory. The key elements are a regular sampling of the bias and the derivative of the activation function. We compare LON, CNN, and scale-space histograms on prototypical single-layer networks. We show how LON and CNN can emulate each other, and how LON expands the set of computable functionals to non-linear functions such as squaring. We demonstrate simple networks that illustrate the improved performance of LON over CNN on simple tasks: estimating the squared gradient magnitude, regressing shape area and perimeter length, and explaining individual pixels' influence on the result.
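The abstract's key elements, a regularly sampled bias and the derivative of the activation function, suggest a soft, locally orderless histogram: each bias value acts as a bin center and the activation derivative as a smooth bin kernel. The NumPy sketch below illustrates that reading of the abstract only; it is not the paper's implementation, and the bin sharpness `beta`, window size, and unit intensity range are hypothetical choices.

```python
import numpy as np

def soft_local_histogram(image, n_bins=8, beta=10.0, window=5):
    """Soft locally orderless histogram of a 2-D image in [0, 1]."""
    # Regularly sampled bias values act as histogram bin centers.
    biases = np.linspace(0.0, 1.0, n_bins)

    def sigmoid_prime(z):
        # Derivative of the sigmoid: a smooth, bump-shaped bin kernel
        # peaked where the intensity matches the bias.
        s = 1.0 / (1.0 + np.exp(-z))
        return s * (1.0 - s)

    pad = window // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    hist = np.zeros((n_bins, h, w))
    for i, b in enumerate(biases):
        # Per-pixel soft bin membership for bin center b.
        resp = sigmoid_prime(beta * (padded - b))
        # Average over a local window: the "locally orderless" pooling.
        for y in range(h):
            for x in range(w):
                hist[i, y, x] = resp[y:y + window, x:x + window].mean()
    return hist
```

The nested loops keep the sketch readable; a practical version would replace them with a separable box filter or a convolution.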
Related papers
- BLIS-Net: Classifying and Analyzing Signals on Graphs [20.345611294709244]
Graph neural networks (GNNs) have emerged as a powerful tool for tasks such as node classification and graph classification.
We introduce the BLIS-Net (Bi-Lipschitz Scattering Net), a novel GNN that builds on the previously introduced geometric scattering transform.
We show that BLIS-Net achieves superior performance on both synthetic and real-world data sets based on traffic flow and fMRI data.
arXiv Detail & Related papers (2023-10-26T17:03:14Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models [9.96121040675476]
This manuscript explores how properties of functions learned by neural networks of depth greater than two layers affect predictions.
Our framework considers a family of networks of varying depths that all have the same capacity but different representation costs.
arXiv Detail & Related papers (2023-05-24T22:10:12Z)
- Continuous approximation by convolutional neural networks with a sigmoidal function [0.0]
We present a class of convolutional neural networks (CNNs) called non-overlapping CNNs.
We prove that such networks with a sigmoidal activation function are capable of approximating arbitrary continuous functions defined on compact input sets to any desired degree of accuracy.
arXiv Detail & Related papers (2022-09-27T12:31:36Z)
- Learnable Filters for Geometric Scattering Modules [64.03877398967282]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
arXiv Detail & Related papers (2022-08-15T22:30:07Z)
- Quantized convolutional neural networks through the lens of partial differential equations [6.88204255655161]
Quantization of Convolutional Neural Networks (CNNs) is a common approach to ease the computational burden involved in the deployment of CNNs.
In this work, we explore ways to improve quantized CNNs using PDE-based perspective and analysis.
arXiv Detail & Related papers (2021-08-31T22:18:52Z)
- Convolutional Neural Networks Demystified: A Matched Filtering Perspective Based Tutorial [7.826806223782053]
Convolutional Neural Networks (CNNs) are the de facto standard for the analysis of large volumes of signals and images.
We revisit their operation from first principles and a matched filtering perspective.
It is our hope that this tutorial will help shed new light and physical intuition into the understanding and further development of deep neural networks.
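The matched-filtering perspective can be made concrete: a convolutional layer computes cross-correlations of the input with learned templates, so the response peaks where the signal locally matches the template. The sketch below is a generic illustration of this idea, not code from the tutorial; `matched_filter_response` is a hypothetical helper.

```python
import numpy as np

def matched_filter_response(signal, template):
    # Cross-correlation (what CNN "convolution" layers actually compute):
    # slide the template over the signal and take inner products.
    n, m = len(signal), len(template)
    return np.array([signal[i:i + m] @ template for i in range(n - m + 1)])

# The response is largest where the signal best matches the template.
signal = np.concatenate([np.zeros(5), [1.0, 2.0, 1.0], np.zeros(5)])
template = np.array([1.0, 2.0, 1.0])
resp = matched_filter_response(signal, template)  # peaks at offset 5
```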
arXiv Detail & Related papers (2021-08-26T09:07:49Z)
- Data-Driven Learning of Geometric Scattering Networks [74.3283600072357]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
arXiv Detail & Related papers (2020-10-06T01:20:27Z)
- How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.