DNN-Based Topology Optimisation: Spatial Invariance and Neural Tangent
Kernel
- URL: http://arxiv.org/abs/2106.05710v1
- Date: Thu, 10 Jun 2021 12:49:55 GMT
- Title: DNN-Based Topology Optimisation: Spatial Invariance and Neural Tangent
Kernel
- Authors: Benjamin Dupuis and Arthur Jacot
- Abstract summary: We study the SIMP method with a density field generated by a fully-connected neural network, taking the coordinates as inputs.
We show that the use of DNNs leads to a filtering effect similar to traditional filtering techniques for SIMP, with a filter described by the Neural Tangent Kernel (NTK).
- Score: 7.106986689736828
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the SIMP method with a density field generated by a fully-connected
neural network, taking the coordinates as inputs. In the large width limit, we
show that the use of DNNs leads to a filtering effect similar to traditional
filtering techniques for SIMP, with a filter described by the Neural Tangent
Kernel (NTK). This filter is however not invariant under translation, leading
to visual artifacts and non-optimal shapes. We propose two embeddings of the
input coordinates, which lead to (approximate) spatial invariance of the NTK
and of the filter. We empirically confirm our theoretical observations and
study how the filter size is affected by the architecture of the network. Our
solution can easily be applied to any other coordinate-based generation
method.
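The core fix the abstract describes is to embed the input coordinates so that the induced NTK, and hence the filter, becomes (approximately) translation-invariant. Below is a minimal, hypothetical NumPy sketch: it uses a random Fourier feature embedding as an illustrative stand-in (the paper proposes two specific embeddings, which are not reproduced here), since the inner product of such features depends only on coordinate differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frequencies are sampled once and held fixed; 'scale' is a hypothetical knob
# controlling the effective filter width (the paper studies how the
# architecture sets the filter size).
d, num_features, scale = 2, 128, 10.0
W = rng.normal(0.0, scale, size=(d, num_features))

def embed(coords):
    """Map (N, d) coordinates to shift-invariant features: the inner product
    of cos/sin pairs depends only on the coordinate difference x - x'."""
    proj = coords @ W
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(num_features)

def density_mlp(features, widths=(256, 256), seed=1):
    """Toy fully-connected network mapping embedded coordinates to densities in (0, 1)."""
    r = np.random.default_rng(seed)
    h = features
    for width in widths:
        Wl = r.normal(0.0, 1.0 / np.sqrt(h.shape[1]), size=(h.shape[1], width))
        h = np.maximum(h @ Wl, 0.0)                      # ReLU
    w_out = r.normal(0.0, 1.0 / np.sqrt(h.shape[1]), size=(h.shape[1], 1))
    return 1.0 / (1.0 + np.exp(-(h @ w_out)))            # sigmoid -> density field

# Densities over a 2-D design grid; in DNN-based SIMP these replace
# the per-element design variables.
xs, ys = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 32))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)
rho = density_mlp(embed(coords)).reshape(32, 64)
```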
Related papers
- Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem [39.347616859256256]
A Graph Convolutional Network (GCN) uses a low-pass filter to extract low-frequency signals from graph topology.
Various methods have been proposed to create an adaptive filter by incorporating an extra filter extracted from the graph topology.
We propose a cross-space adaptive filter, called CSF, to produce the adaptive-frequency information extracted from both the topology and attribute spaces.
arXiv Detail & Related papers (2024-01-26T14:02:29Z)
- Memory-efficient particle filter recurrent neural network for object localization [53.68402839500528]
This study proposes a novel memory-efficient recurrent neural network (RNN) architecture designed to solve the object localization problem.
We take the idea of the classical particle filter and combine it with a GRU-based RNN architecture.
In our experiments, the mePFRNN model provides more precise localization than the considered competitors and requires fewer trained parameters.
arXiv Detail & Related papers (2023-10-02T19:41:19Z)
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z)
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- Understanding the Basis of Graph Convolutional Neural Networks via an Intuitive Matched Filtering Approach [7.826806223782053]
Graph Convolutional Neural Networks (GCNNs) are becoming a preferred model for processing data on irregular domains.
We show that their convolution layers effectively perform matched filtering of the input data with the chosen patterns; a toy sketch follows this entry.
A numerical example guides the reader through the various steps of GCNN operation and learning, both visually and numerically.
arXiv Detail & Related papers (2021-08-23T12:41:06Z)
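As a reading aid for the matched-filtering claim above, here is a hypothetical toy example (not the paper's code): cross-correlating a signal with a fixed template, which is what a deep-learning convolution layer computes up to a kernel flip, responds maximally where the input contains the pattern.

```python
import numpy as np

template = np.array([1.0, -2.0, 1.0])    # the "chosen pattern"
signal = np.zeros(32)
signal[10:13] = template                 # embed the pattern at position 10

# Cross-correlation (a convolution layer without the kernel flip).
response = np.correlate(signal, template, mode="valid")
print(int(np.argmax(response)))          # -> 10: the matched filter fires at the pattern
```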
- Resolution learning in deep convolutional networks using scale-space theory [31.275270391367425]
Resolution in deep convolutional neural networks (CNNs) is typically bounded by the receptive field, which is set by filter sizes and by subsampling layers or strided convolutions on feature maps.
We propose to do away with hard-coded resolution hyperparameters and aim to learn the appropriate resolution from data.
We use scale-space theory to obtain a self-similar parametrization of filters and make use of the N-Jet, a truncated Taylor series, to approximate a filter by a learned combination of Gaussian derivative filters; a minimal sketch follows this entry.
arXiv Detail & Related papers (2021-06-07T08:23:02Z)
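The N-Jet construction above admits a compact illustration. The following is a minimal sketch under assumed details: the combination weights 'alpha' and the scale 'sigma' are placeholders that would be learned in the paper's method.

```python
import numpy as np

def gaussian_derivatives(sigma, order_max=2, radius=None):
    """Return 1-D Gaussian derivative filters up to 'order_max' at scale 'sigma'."""
    radius = radius or int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    g1 = -x / sigma**2 * g                    # first Gaussian derivative
    g2 = (x**2 - sigma**2) / sigma**4 * g     # second Gaussian derivative
    return [g, g1, g2][: order_max + 1]

# A filter expressed as a combination of the Gaussian-derivative basis;
# 'alpha' and 'sigma' are illustrative placeholders, not learned values.
alpha = np.array([0.5, 1.0, -0.3])
basis = gaussian_derivatives(sigma=2.0)
filt = sum(a * b for a, b in zip(alpha, basis))   # effective filter; sigma sets its size
```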
- Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation inherits a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient; a sketch follows this entry.
arXiv Detail & Related papers (2021-06-02T19:15:34Z)
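A hypothetical sketch of the unsharp-masking formulation described above: a box blur stands in for the low-pass filter, and a fixed 'coeff' stands in for the single estimated coefficient (the paper's actual estimator is not reproduced here).

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box low-pass filter (same-size output)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)

def unsharp_guided_filter(target, guide, coeff=0.8, k=5):
    # Low-pass prior from the target; high-frequency structure transferred
    # from the guide through a single coefficient (fixed here, estimated in the paper).
    return box_blur(target, k) + coeff * (guide - box_blur(guide, k))

rng = np.random.default_rng(0)
target, guide = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))
out = unsharp_guided_filter(target, guide)
```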
- Compressing Deep CNNs using Basis Representation and Spectral Fine-tuning [2.578242050187029]
We propose an efficient and straightforward method for compressing deep convolutional neural networks (CNNs).
Specifically, any spatial convolution layer of the CNN can be replaced by two successive convolution layers.
We fine-tune both the basis and the filter representation to directly mitigate any performance loss due to the truncation; a sketch of the factorization follows this entry.
arXiv Detail & Related papers (2021-05-21T16:14:26Z)
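A minimal sketch of the two-successive-convolutions idea above, under assumed details: a plain SVD factors the filter bank into shared KxK basis kernels plus a 1x1 combination, with 'rank' controlling the truncation (the paper additionally fine-tunes both factors, which is omitted here).

```python
import numpy as np

c_out, c_in, k, rank = 8, 4, 3, 5
W = np.random.default_rng(0).normal(size=(c_out, c_in, k, k))

# SVD over the spatial dimension: each (c_out, c_in) filter becomes a
# combination of a small set of shared basis kernels.
U, S, Vt = np.linalg.svd(W.reshape(c_out * c_in, k * k), full_matrices=False)
basis = Vt[:rank].reshape(rank, k, k)                         # first conv: 'rank' shared KxK kernels
coeffs = (U[:, :rank] * S[:rank]).reshape(c_out, c_in, rank)  # second conv: 1x1 combination

# Reconstruct the truncated layer and measure the approximation error.
W_approx = np.einsum("oir,rxy->oixy", coeffs, basis)
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
```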
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework, AdaGNN, with a smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)