QuadConv: Quadrature-Based Convolutions with Applications to Non-Uniform
PDE Data Compression
- URL: http://arxiv.org/abs/2211.05151v3
- Date: Mon, 28 Aug 2023 14:38:11 GMT
- Title: QuadConv: Quadrature-Based Convolutions with Applications to Non-Uniform
PDE Data Compression
- Authors: Kevin Doherty, Cooper Simpson, Stephen Becker, Alireza Doostan
- Abstract summary: We present a new convolution layer for deep learning architectures which we call QuadConv.
Our operator is developed explicitly for use on non-uniform, mesh-based data.
We show that QuadConv can match the performance of standard discrete convolutions on uniform grid data.
- Score: 6.488002704957669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new convolution layer for deep learning architectures which we
call QuadConv -- an approximation to continuous convolution via quadrature. Our
operator is developed explicitly for use on non-uniform, mesh-based data, and
accomplishes this by learning a continuous kernel that can be sampled at
arbitrary locations. Moreover, the construction of our operator admits an
efficient implementation which we detail and construct. As an experimental
validation of our operator, we consider the task of compressing partial
differential equation (PDE) simulation data from fixed meshes. We show that
QuadConv can match the performance of standard discrete convolutions on uniform
grid data by comparing a QuadConv autoencoder (QCAE) to a standard
convolutional autoencoder (CAE). Further, we show that the QCAE can maintain
this accuracy even on non-uniform data. In both cases, QuadConv also
outperforms alternative unstructured convolution methods such as graph
convolution.
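The quadrature construction described above can be sketched in a few lines. This is a hedged, minimal illustration only: the paper's learned continuous kernel is replaced by a fixed Gaussian bump, and trapezoidal weights on a random 1D mesh stand in for the paper's quadrature rule; it is not the authors' implementation.

```python
import numpy as np

# Stand-in for the learned continuous kernel: a fixed Gaussian bump
# that, like QuadConv's kernel, can be evaluated at arbitrary offsets.
def kernel(r, bandwidth=0.2):
    return np.exp(-(r / bandwidth) ** 2)

def quad_conv(f_vals, nodes, weights, out_points):
    """Quadrature convolution: (f*g)(x) ~= sum_j w_j g(x - y_j) f(y_j)."""
    out = np.empty(len(out_points))
    for i, x in enumerate(out_points):
        out[i] = np.sum(weights * kernel(x - nodes) * f_vals)
    return out

# Non-uniform 1D mesh on [0, 1] with trapezoidal quadrature weights.
nodes = np.sort(np.random.default_rng(0).uniform(0.0, 1.0, 64))
gaps = np.diff(nodes)
weights = np.zeros_like(nodes)
weights[:-1] += 0.5 * gaps
weights[1:] += 0.5 * gaps

f_vals = np.sin(2 * np.pi * nodes)              # signal sampled on the mesh
out = quad_conv(f_vals, nodes, weights, nodes)  # evaluate at the same nodes
print(out.shape)  # (64,)
```

Because the kernel is a continuous function rather than a grid of weights, the same operator applies unchanged to any mesh, which is the property the paper exploits for non-uniform data.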
Related papers
- Multi-Convformer: Extending Conformer with Multiple Convolution Kernels [64.4442240213399]
We introduce Multi-Convformer that uses multiple convolution kernels within the convolution module of the Conformer in conjunction with gating.
Our model rivals existing Conformer variants such as CgMLP and E-Branchformer in performance, while being more parameter efficient.
We empirically compare our approach with Conformer and its variants across four different datasets and three different modelling paradigms and show up to 8% relative word error rate (WER) improvements.

arXiv Detail & Related papers (2024-07-04T08:08:12Z)
- LDConv: Linear deformable convolution for improving convolutional neural networks [18.814748446649627]
Linear Deformable Convolution (LDConv) is a plug-and-play operation that can replace standard convolution to improve network performance.
LDConv reduces the parameter growth of standard convolution and Deformable Conv from a quadratic trend to a linear one.
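The linear-growth idea can be sketched as follows: one weight per sampled offset, so parameters grow linearly in the number of points N rather than quadratically in a kernel size. The offsets and weights below are random stand-ins for LDConv's learned ones, and plain bilinear interpolation handles fractional positions; this is an illustration of the sampling idea, not the paper's operator.

```python
import numpy as np

rng = np.random.default_rng(4)

def bilinear(img, y, x):
    """Bilinearly interpolate img at fractional coordinates (y, x)."""
    y0 = int(np.clip(np.floor(y), 0, img.shape[0] - 2))
    x0 = int(np.clip(np.floor(x), 0, img.shape[1] - 2))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

N = 5                                          # any number of sampled points
offsets = rng.uniform(-1.5, 1.5, size=(N, 2))  # stand-in for learned offsets
weights = rng.normal(size=N)                   # N parameters: linear in N
img = rng.normal(size=(16, 16))

out = np.zeros((14, 14))
for i in range(1, 15):
    for j in range(1, 15):
        vals = [bilinear(img, i + dy, j + dx) for dy, dx in offsets]
        out[i - 1, j - 1] = np.dot(weights, vals)
print(out.shape)  # (14, 14)
```

A standard k-by-k kernel needs k^2 weights per channel pair; sampling N arbitrary points needs only N, which is the growth correction the summary refers to.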
arXiv Detail & Related papers (2023-11-20T07:54:54Z)
- Soft Convex Quantization: Revisiting Vector Quantization with Convex Optimization [40.1651740183975]
We propose Soft Convex Quantization (SCQ) as a direct substitute for Vector Quantization (VQ)
SCQ works like a differentiable convex optimization (DCO) layer.
We demonstrate its efficacy on the CIFAR-10, GTSRB and LSUN datasets.
arXiv Detail & Related papers (2023-10-04T17:45:14Z)
- Why Approximate Matrix Square Root Outperforms Accurate SVD in Global Covariance Pooling? [59.820507600960745]
We propose a new GCP meta-layer that uses SVD in the forward pass and Padé approximants in the backward pass to compute the gradients.
The proposed meta-layer has been integrated into different CNN models and achieves state-of-the-art performances on both large-scale and fine-grained datasets.
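For context on approximate versus exact matrix square roots: the cited paper's Padé-based backward pass is not reproduced here; the sketch below instead uses the well-known Newton-Schulz iteration, a different approximation scheme, only to show that an iterative method closely tracks the exact SVD-based square root of an SPD covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

X = rng.normal(size=(50, 8))
A = X.T @ X / 50 + 1e-3 * np.eye(8)   # SPD covariance matrix

def sqrtm_svd(A):
    """Exact matrix square root via SVD (the accurate baseline)."""
    U, s, Vt = np.linalg.svd(A)
    return U @ np.diag(np.sqrt(s)) @ Vt

def sqrtm_ns(A, iters=20):
    """Newton-Schulz iteration: scale A so the iteration converges,
    run the coupled updates, then undo the scaling."""
    norm = np.linalg.norm(A)
    Y, Z = A / norm, np.eye(len(A))
    for _ in range(iters):
        T = 0.5 * (3 * np.eye(len(A)) - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * np.sqrt(norm)

err = np.linalg.norm(sqrtm_ns(A) - sqrtm_svd(A)) / np.linalg.norm(sqrtm_svd(A))
print(f"relative error: {err:.2e}")
```

The iteration uses only matrix multiplies, which (as the paper's title suggests) tend to be friendlier on GPUs than SVD, even though the result is only approximate.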
arXiv Detail & Related papers (2021-05-06T08:03:45Z)
- CKConv: Continuous Kernel Convolution For Sequential Data [23.228639801282966]
Continuous Kernel Convolutional Networks (CKCNNs) are designed to handle non-uniformly and irregularly sampled data.
CKCNNs match or perform better than neural ODEs designed for these purposes in a much faster and simpler manner.
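The continuous-kernel idea can be illustrated with an untrained stand-in: a tiny MLP with random weights maps a relative timestamp to a kernel value, so the kernel can be evaluated directly at irregular sample times. This is a sketch of the concept only, not the CKCNN architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-weight MLP standing in for CKConv's trained kernel network.
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def continuous_kernel(dt):
    """MLP kernel psi(dt), evaluable at any relative timestamp dt."""
    h = np.tanh(W1 @ np.atleast_2d(dt) + b1[:, None])
    return (W2 @ h + b2[:, None]).ravel()

def ck_conv(values, times):
    """Causal convolution of an irregularly sampled sequence."""
    out = np.zeros_like(values)
    for i, t in enumerate(times):
        past = times[: i + 1]
        out[i] = np.sum(continuous_kernel(t - past) * values[: i + 1])
    return out

times = np.sort(rng.uniform(0.0, 5.0, 40))   # irregular sample times
values = np.cos(times)
y = ck_conv(values, times)
print(y.shape)  # (40,)
```

Because the kernel is a function of the time offset rather than a fixed array, no resampling onto a uniform grid is needed, which is the property shared with QuadConv above.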
arXiv Detail & Related papers (2021-02-04T13:51:19Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this lack of self-consistency on the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- DO-Conv: Depthwise Over-parameterized Convolutional Layer [66.46704754669169]
We propose to augment a convolutional layer with an additional depthwise convolution, where each input channel is convolved with a different 2D kernel.
We show with extensive experiments that the mere replacement of conventional convolutional layers with DO-Conv layers boosts the performance of CNNs.
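The key property of such over-parameterization is that the extra depthwise kernel folds into the conventional kernel at inference, adding training-time parameters at zero deployment cost. A hedged single-channel 1D sketch of that fold (not DO-Conv's actual 2D layer):

```python
import numpy as np

rng = np.random.default_rng(2)

K, M = 3, 5                   # kernel size, depthwise depth multiplier
D = rng.normal(size=(M, K))   # depthwise kernel: M features per tap
W = rng.normal(size=(M,))     # conventional kernel over the depth axis
x = rng.normal(size=32)       # input signal

# Inference-time view: the pair collapses into one ordinary kernel.
folded = W @ D                # shape (K,)
y_folded = np.convolve(x, folded, mode="valid")

# Training-time view: depthwise pass first, then combine with W.
y_composed = sum(W[m] * np.convolve(x, D[m], mode="valid") for m in range(M))

print(np.allclose(y_folded, y_composed))  # True
```

The equivalence holds because convolution is linear in the kernel, so the composed operator is exactly one convolution with the folded kernel.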
arXiv Detail & Related papers (2020-06-22T06:57:10Z)
- Dynamic Region-Aware Convolution [85.20099799084026]
We propose a new convolution called Dynamic Region-Aware Convolution (DRConv), which can automatically assign multiple filters to corresponding spatial regions.
On ImageNet classification, DRConv-based ShuffleNetV2-0.5x achieves state-of-the-art performance of 67.1% accuracy at the 46M multiply-adds level, a 6.3% relative improvement.
arXiv Detail & Related papers (2020-03-27T05:49:57Z)
- Quaternion Equivariant Capsule Networks for 3D Point Clouds [58.566467950463306]
We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations.
We connect dynamic routing between capsules to the well-known Weiszfeld algorithm.
Based on our operator, we build a capsule network that disentangles geometry from pose.
arXiv Detail & Related papers (2019-12-27T13:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.