Mesh-based graph convolutional neural network models of processes with
complex initial states
- URL: http://arxiv.org/abs/2107.00090v1
- Date: Fri, 4 Jun 2021 03:40:40 GMT
- Authors: Ari Frankel and Cosmin Safta and Coleman Alleman and Reese Jones
- Abstract summary: We propose a graph convolutional neural network that utilizes the discretized representation of the initial microstructure directly, without segmentation or clustering.
We demonstrate the performance of the proposed network and compare it to traditional pixel-based convolutional neural network models and feature-based graph convolutional neural networks on three large datasets.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Predicting the evolution of a representative sample of a material with
microstructure is a fundamental problem in homogenization. In this work we
propose a graph convolutional neural network that utilizes the discretized
representation of the initial microstructure directly, without segmentation or
clustering. Compared to feature-based and pixel-based convolutional neural
network models, the proposed method has a number of advantages: (a) it is deep
in that it does not require featurization but can benefit from it, (b) it has a
simple implementation with standard convolutional filters and layers, (c) it
works natively on unstructured and structured grid data without interpolation
(unlike pixel-based convolutional neural networks), and (d) it preserves
rotational invariance like other graph-based convolutional neural networks. We
demonstrate the performance of the proposed network and compare it to
traditional pixel-based convolutional neural network models and feature-based
graph convolutional neural networks on three large datasets.
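The core idea of operating directly on the discretized mesh can be illustrated with a minimal graph-convolution sketch. This is a generic Kipf-and-Welling-style propagation rule, not the authors' exact architecture; the mesh, feature sizes, and function names here are hypothetical. Nodes carry per-cell microstructure features and edges connect neighboring cells, so the same code applies to structured and unstructured grids without interpolation.

```python
import numpy as np

def mesh_adjacency(edges, n_nodes):
    """Build a symmetric adjacency matrix (with self-loops) from mesh edges."""
    A = np.eye(n_nodes)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A

def gcn_layer(X, A, W):
    """One graph-convolution layer: relu(D^{-1/2} A D^{-1/2} X W)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt   # symmetrically normalized adjacency
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy unstructured mesh: 4 nodes connected by a small triangulation.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
A = mesh_adjacency(edges, 4)
X = np.random.default_rng(0).normal(size=(4, 3))   # per-node microstructure features
W = np.random.default_rng(1).normal(size=(3, 8))   # learnable weights
H = gcn_layer(X, A, W)
print(H.shape)  # (4, 8)
```

Because the aggregation depends only on mesh connectivity, not node coordinates, the output is unchanged under rigid rotations of the mesh, which is the rotational-invariance property the abstract highlights.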
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Parameter Convex Neural Networks [13.42851919291587]
We propose the exponential multilayer neural network (EMLP), which is convex with respect to the network parameters under some conditions.
For later experiments, we use the same architecture to build the exponential graph convolutional network (EGCN) and evaluate it on a graph classification dataset.
arXiv Detail & Related papers (2022-06-11T16:44:59Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A Sparse Coding Interpretation of Neural Networks and Theoretical Implications [0.0]
Deep convolutional neural networks have achieved unprecedented performance in various computer vision tasks.
We propose a sparse coding interpretation of neural networks that use ReLU activations.
We derive a complete convolutional neural network without normalization and pooling.
arXiv Detail & Related papers (2021-08-14T21:54:47Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which uses only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Lookup subnet based Spatial Graph Convolutional neural Network [3.119764474774276]
We propose a cross-correlation based graph convolution method that naturally generalizes CNNs to non-Euclidean domains.
Our method achieves or matches popular state-of-the-art results across three established graph benchmarks.
arXiv Detail & Related papers (2021-02-04T13:05:30Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn binary node representations with binary network parameters.
Our proposed method can be seamlessly integrated into existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Research on a New Convolutional Neural Network Model Combined with Random Edges Adding [10.519799195357209]
A random edge-adding algorithm is proposed to improve the performance of convolutional neural network models.
Simulation results show that recognition accuracy and training convergence speed are greatly improved by the randomly edge-added reconstructed models.
arXiv Detail & Related papers (2020-03-17T16:17:55Z)
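The random-edge-adding idea in the last entry above can be sketched in a few lines. This is a hypothetical illustration of perturbing a graph's connectivity, not the paper's exact algorithm; the function name and graph are made up for the example.

```python
import numpy as np

def add_random_edges(A, n_new, rng):
    """Return a copy of symmetric adjacency A with n_new new random edges added."""
    A = A.copy()
    n = A.shape[0]
    added = 0
    while added < n_new:
        i, j = rng.integers(0, n, size=2)
        if i != j and A[i, j] == 0:   # skip self-loops and existing edges
            A[i, j] = A[j, i] = 1.0
            added += 1
    return A

rng = np.random.default_rng(0)
A = np.zeros((6, 6))
for i in range(5):                    # start from a simple 6-node chain graph
    A[i, i + 1] = A[i + 1, i] = 1.0
A2 = add_random_edges(A, 3, rng)
print(int(A2.sum() // 2))  # chain has 5 edges; 3 added -> 8
```

The rewired adjacency can then be used to define new connections in a convolutional model, which is the kind of reconstruction the entry's simulation results refer to.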
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.