Inferring topological transitions in pattern-forming processes with
self-supervised learning
- URL: http://arxiv.org/abs/2203.10204v1
- Date: Sat, 19 Mar 2022 00:47:50 GMT
- Title: Inferring topological transitions in pattern-forming processes with
self-supervised learning
- Authors: Marcin Abram, Keith Burghardt, Greg Ver Steeg, Aram Galstyan, Remi
Dingreville
- Abstract summary: We use a self-supervised approach to predict process parameters from observed microstructures using neural networks.
We show that the difficulty of performing this prediction task is related to the goal of discovering microstructure regimes.
This approach opens a promising path forward for discovering and understanding unseen or hard-to-detect transition regimes.
- Score: 25.90630151217217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The identification and classification of transitions in topological and
microstructural regimes in pattern-forming processes is critical for
understanding and fabricating microstructurally precise novel materials in many
application domains. Unfortunately, relevant microstructure transitions may
depend on process parameters in subtle and complex ways that are not captured
by the classic theory of phase transitions. While supervised machine learning
methods may be useful for identifying transition regimes, they need labels
which require prior knowledge of order parameters or relevant structures.
Motivated by the universality principle for dynamical systems, we instead use a
self-supervised approach to solve the inverse problem of predicting process
parameters from observed microstructures using neural networks. This approach
does not require labeled data about the target task of predicting
microstructure transitions. We show that the difficulty of performing this
prediction task is related to the goal of discovering microstructure regimes,
because qualitative changes in microstructural patterns correspond to changes
in uncertainty for our self-supervised prediction problem. We demonstrate the
value of our approach by automatically discovering transitions in
microstructural regimes in two distinct pattern-forming processes: the spinodal
decomposition of a two-phase mixture and the formation of concentration
modulations of binary alloys during physical vapor deposition of thin films.
This approach opens a promising path forward for discovering and understanding
unseen or hard-to-detect transition regimes, and ultimately for controlling
complex pattern-forming processes.
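A minimal sketch of the approach described in the abstract, assuming a PyTorch setup: a small CNN is trained to regress the process parameters that generated each simulated microstructure, and spikes in the per-sample prediction error along a parameter sweep are read as candidate transition regimes. All names and sizes below are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class ParamRegressor(nn.Module):
    """CNN that maps a microstructure image to the process parameters
    (e.g., composition and temperature) used to simulate it."""
    def __init__(self, n_params: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, images, params):
    """One self-supervised step: the 'labels' are the simulation inputs,
    so no annotation of microstructure regimes is ever needed."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), params)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def transition_score(model, images, params):
    """Per-sample prediction error. Qualitative changes in the pattern make
    the inverse problem harder, so peaks along a parameter sweep flag
    candidate regime transitions."""
    return ((model(images) - params) ** 2).mean(dim=1)
```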
Related papers
- STEM Diffraction Pattern Analysis with Deep Learning Networks [0.0]
This work presents a machine learning-based approach for predicting Euler angles directly from scanning transmission electron microscopy (STEM) diffraction patterns (DPs).
It enables the automated generation of high-resolution crystal orientation maps, facilitating the analysis of internal microstructures at the nanoscale.
Three deep learning architectures--convolutional neural networks (CNNs), Dense Convolutional Networks (DenseNets), and Shifted Windows (Swin) Transformers--are evaluated, using an experimentally acquired dataset labelled via a commercial TM algorithm.
arXiv Detail & Related papers (2025-07-02T16:58:09Z)
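A hedged sketch of the kind of regression the STEM entry describes: a CNN maps a diffraction pattern to three Euler angles; because angles are periodic, the head predicts (sin, cos) pairs and recovers angles with atan2. The architecture and names are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class EulerAngleNet(nn.Module):
    """CNN regressor from a diffraction pattern to three Euler angles."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 6)  # one (sin, cos) pair per angle

    def forward(self, dp):
        sincos = self.head(self.backbone(dp)).view(-1, 3, 2)
        # atan2 recovers each angle in (-pi, pi], respecting periodicity
        return torch.atan2(sincos[..., 0], sincos[..., 1])
```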
- Generalized Linear Mode Connectivity for Transformers [87.32299363530996]
A striking phenomenon is linear mode connectivity (LMC), where independently trained models can be connected by low- or zero-loss paths.
Prior work has predominantly focused on neuron re-ordering through permutations, but such approaches are limited in scope.
We introduce a unified framework that captures four symmetry classes: permutations, semi-permutations, transformations, and general invertible maps.
This generalization enables, for the first time, the discovery of low- and zero-barrier linear paths between independently trained Vision Transformers and GPT-2 models.
arXiv Detail & Related papers (2025-06-28T01:46:36Z)
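For reference, a common way to quantify linear mode connectivity is to scan the loss along the straight line between two trained weight vectors. A sketch, assuming the two models have already been aligned (e.g., by one of the symmetry-matching steps the entry above describes) and that `loss_fn(model, data)` is a user-supplied evaluation routine:

```python
import copy
import torch

def interpolate_state(sa, sb, alpha):
    """Weights at position alpha on the segment between state dicts sa, sb."""
    return {k: (1 - alpha) * sa[k] + alpha * sb[k] for k in sa}

@torch.no_grad()
def loss_barrier(model, state_a, state_b, loss_fn, data, steps=11):
    """Max loss increase over the linear interpolation of the endpoints;
    a value near zero indicates linear mode connectivity."""
    probe = copy.deepcopy(model)
    alphas = torch.linspace(0, 1, steps)
    losses = []
    for a in alphas:
        probe.load_state_dict(interpolate_state(state_a, state_b, a.item()))
        losses.append(loss_fn(probe, data))
    losses = torch.tensor(losses)
    reference = (1 - alphas) * losses[0] + alphas * losses[-1]
    return (losses - reference).max().item()
```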
- Neural Network Reprogrammability: A Unified Theme on Model Reprogramming, Prompt Tuning, and Prompt Instruction [55.914891182214475]
We introduce neural network reprogrammability as a unifying framework for model adaptation.
We present a taxonomy that categorizes such information manipulation approaches across four key dimensions.
We also analyze remaining technical challenges and ethical considerations.
arXiv Detail & Related papers (2025-06-05T05:42:27Z)
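One concrete instance of the reprogrammability theme is classic input-level model reprogramming: a frozen pretrained classifier is adapted to a new task by learning only a padded input perturbation plus a fixed output-label mapping. A hypothetical sketch (sizes and names illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class ReprogrammedModel(nn.Module):
    """Frozen source model + trainable input 'program' + label mapping."""
    def __init__(self, frozen_model, in_size=224, target_size=32,
                 n_src_classes=1000, n_tgt_classes=10):
        super().__init__()
        for p in frozen_model.parameters():
            p.requires_grad_(False)          # pretrained weights stay fixed
        self.model = frozen_model
        self.delta = nn.Parameter(torch.zeros(3, in_size, in_size))
        self.pad = (in_size - target_size) // 2
        # fixed many-to-one mapping from source logits to target classes
        self.register_buffer("label_map",
                             torch.arange(n_src_classes) % n_tgt_classes)
        self.n_tgt = n_tgt_classes

    def forward(self, x_small):              # x_small: (B, 3, 32, 32)
        x = nn.functional.pad(x_small, [self.pad] * 4)
        logits = self.model(x + self.delta)  # only delta receives gradients
        out = torch.zeros(x.size(0), self.n_tgt, device=x.device)
        return out.index_add(1, self.label_map, logits)
```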
- Uncovering Magnetic Phases with Synthetic Data and Physics-Informed Training [0.0]
We investigate the efficient learning of magnetic phases using artificial neural networks trained on synthetic data.
We incorporate two key forms of physics-informed guidance to enhance model performance.
Our results show that synthetic, structured, and computationally efficient training schemes can reveal physically meaningful phase boundaries.
arXiv Detail & Related papers (2025-05-15T15:16:16Z)
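A hedged illustration of physics-informed guidance in this spirit (not the paper's exact scheme): alongside the usual classification loss on synthetic spin configurations, an auxiliary term ties the predicted phase probability to a known order parameter such as the absolute magnetization.

```python
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 2)
)

def physics_informed_loss(spins, labels, lam=0.1):
    """spins: (B, 32, 32) float tensors of +/-1 from a synthetic simulator;
    labels: 0 = disordered, 1 = ordered. The second term is the
    physics-informed guidance."""
    logits = classifier(spins)
    ce = nn.functional.cross_entropy(logits, labels)
    p_ordered = logits.softmax(dim=1)[:, 1]
    magnetization = spins.flatten(1).mean(dim=1).abs()  # known order parameter
    physics = nn.functional.mse_loss(p_ordered, magnetization)
    return ce + lam * physics
```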
- Geometry of Learning -- L2 Phase Transitions in Deep and Shallow Neural Networks [0.3683202928838613]
This paper establishes a unified framework for such transitions by integrating the Ricci curvature of the loss landscape with regularizer-driven deep learning.
Our work paves the way for more informed regularization strategies and potentially new methods for probing the intrinsic structure of neural networks beyond the L2 context.
arXiv Detail & Related papers (2025-05-10T11:02:30Z)
- Learning Metal Microstructural Heterogeneity through Spatial Mapping of Diffraction Latent Space Features [0.2692359362045324]
It is crucial to develop a data-reduced representation of metal microstructures.
This need is particularly relevant for metallic materials processed through additive manufacturing.
We propose the physical spatial mapping of metal diffraction latent space features.
arXiv Detail & Related papers (2025-01-30T00:16:07Z)
- High-Dimensional Markov-switching Ordinary Differential Processes [23.17395115394655]
We develop a two-stage algorithm that first recovers the continuous sample path from discrete samples and then estimates the parameters of the processes.
We provide novel theoretical insights into the statistical error and linear convergence guarantee when the processes are $\beta$-mixing.
We apply this model to investigate the differences in resting-state brain networks between the ADHD group and normal controls.
arXiv Detail & Related papers (2024-12-30T18:41:28Z)
- Neuron: Learning Context-Aware Evolving Representations for Zero-Shot Skeleton Action Recognition [64.56321246196859]
We propose a novel dyNamically Evolving dUal skeleton-semantic syneRgistic framework.
We first construct the spatial-temporal evolving micro-prototypes and integrate dynamic context-aware side information.
We introduce the spatial compression and temporal memory mechanisms to guide the growth of spatial-temporal micro-prototypes.
arXiv Detail & Related papers (2024-11-18T05:16:11Z)
- Learning to Predict Mutation Effects of Protein-Protein Interactions by Microenvironment-aware Hierarchical Prompt Learning [78.38442423223832]
We develop a novel codebook pre-training task, namely masked microenvironment modeling.
We demonstrate superior performance and training efficiency over state-of-the-art pre-training-based methods in mutation effect prediction.
arXiv Detail & Related papers (2024-05-16T03:53:21Z)
- A phase transition between positional and semantic learning in a solvable model of dot-product attention [30.96921029675713]
A solvable model of dot-product attention is studied as a non-linear self-attention layer with trainable, tied, low-rank query and key matrices.
We show that the model learns either a positional attention mechanism (with tokens attending to each other based on their respective positions) or a semantic attention mechanism (with tokens attending to each other based on their meaning), and that it exhibits a transition from the former to the latter with increasing sample complexity.
arXiv Detail & Related papers (2024-02-06T11:13:54Z)
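The solvable model's setting can be sketched as a single self-attention layer whose queries and keys share one trainable low-rank matrix; whether that matrix aligns with positional or content directions of the input distinguishes the two mechanisms. Dimensions below are illustrative:

```python
import torch
import torch.nn as nn

class TiedLowRankAttention(nn.Module):
    """Self-attention with one trainable low-rank matrix shared by queries
    and keys, so the score matrix is X W W^T X^T."""
    def __init__(self, d_model=64, rank=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_model, rank) / d_model ** 0.5)

    def forward(self, x):                    # x: (batch, tokens, d_model)
        q = x @ self.W                       # tied queries and keys
        scores = q @ q.transpose(1, 2) / q.size(-1) ** 0.5
        # if W picks up positional-encoding directions the mechanism is
        # positional; if it picks up content directions it is semantic
        return scores.softmax(dim=-1) @ x
```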
- In-Context Convergence of Transformers [63.04956160537308]
We study the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent.
For data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process.
arXiv Detail & Related papers (2023-10-08T17:55:33Z)
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
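A minimal sketch of the idea, under the usual VAE assumptions: trajectories are compressed into a small latent space, and latent dimensions whose posterior collapses to the prior (per-dimension KL near zero) are read as unused, so the number of active dimensions estimates the minimal parameter count. Names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    def __init__(self, traj_len=100, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(traj_len, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, traj_len))

    def forward(self, x):                     # x: (batch, traj_len)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar, beta=4.0):
    """beta > 1 pressures unneeded latent dimensions toward the prior;
    dimensions whose KL term stays near zero are deemed unused."""
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
    return rec + beta * kl
```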
- A Neural Network Transformer Model for Composite Microstructure Homogenization [1.2277343096128712]
Homogenization methods, such as the Mori-Tanaka method, offer rapid homogenization for a wide range of constituent properties.
This paper illustrates a transformer neural network architecture that captures the knowledge of various microstructures.
The network predicts the history-dependent, non-linear, and homogenized stress-strain response.
arXiv Detail & Related papers (2023-04-16T19:57:52Z)
- MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
arXiv Detail & Related papers (2022-10-24T15:14:26Z)
- A Causality-Based Learning Approach for Discovering the Underlying Dynamics of Complex Systems from Partial Observations with Stochastic Parameterization [1.2882319878552302]
This paper develops a new iterative learning algorithm for complex turbulent systems with partial observations.
It alternates between identifying model structures, recovering unobserved variables, and estimating parameters.
Numerical experiments show that the new algorithm succeeds in identifying the model structure and providing suitable parameterizations for many complex nonlinear systems.
arXiv Detail & Related papers (2022-08-19T00:35:03Z)
- Unsupervised machine learning of topological phase transitions from experimental data [52.77024349608834]
We apply unsupervised machine learning techniques to experimental data from ultracold atoms.
We obtain the topological phase diagram of the Haldane model in a completely unbiased fashion.
Our work provides a benchmark for unsupervised detection of new exotic phases in complex many-body systems.
arXiv Detail & Related papers (2021-01-14T16:38:21Z)
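The generic unsupervised recipe behind results like this can be sketched as dimensionality reduction plus clustering of measurement snapshots along a control-parameter sweep. This is an illustrative stand-in (scikit-learn), not the paper's exact pipeline for ultracold-atom data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def phase_boundaries(snapshots, control_params, n_phases=2):
    """snapshots: (N, H, W) array of measurements; control_params: (N,)
    values of the swept parameter. Returns cluster labels and the
    parameter values where the assigned cluster changes."""
    X = snapshots.reshape(len(snapshots), -1)
    z = PCA(n_components=10).fit_transform(X)   # compress each snapshot
    labels = KMeans(n_clusters=n_phases, n_init=10).fit_predict(z)
    order = np.argsort(control_params)
    jumps = np.nonzero(np.diff(labels[order]))[0]  # cluster switches
    return labels, control_params[order][jumps]
```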
- Data-Driven Topology Optimization with Multiclass Microstructures using Latent Variable Gaussian Process [18.17435834037483]
We develop a multi-response latent-variable Gaussian process (MR-LVGP) model for the microstructure libraries of metamaterials.
The MR-LVGP model embeds the mixed variables into a continuous design space based on their collective effects on the responses.
We show that considering multiclass microstructures can lead to improved performance due to the consistent load-transfer paths for micro- and macro-structures.
arXiv Detail & Related papers (2020-06-27T03:55:52Z)
- Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers [42.93754828584075]
We present a new Transformer architecture, Performer, based on Fast Attention Via Orthogonal Random features (FAVOR).
Our mechanism scales linearly rather than quadratically in the number of tokens in the sequence, is characterized by sub-quadratic space complexity and does not incorporate any sparsity pattern priors.
It provides strong theoretical guarantees: unbiased estimation of the attention matrix and uniform convergence.
arXiv Detail & Related papers (2020-06-05T17:09:16Z)
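A hedged sketch of random-feature linear attention in the FAVOR spirit: a kernel feature map phi replaces the softmax so that attention costs O(L) in sequence length instead of O(L^2). This uses i.i.d. Gaussian positive features rather than the paper's orthogonal construction, so it illustrates the mechanism, not the paper's exact estimator:

```python
import torch

def feature_map(x, proj):
    """Positive random features: E[phi(q) . phi(k)] approximates exp(q . k)."""
    m = proj.size(0)
    return torch.exp(x @ proj.T - (x ** 2).sum(-1, keepdim=True) / 2) / m ** 0.5

def linear_attention(q, k, v, n_features=64):
    """q, k, v: (batch, length, dim). Cost is linear in length because keys
    and values are summarized once before the queries are applied."""
    d = q.size(-1)
    proj = torch.randn(n_features, d)         # i.i.d. Gaussian features
    qp = feature_map(q / d ** 0.25, proj)     # (B, L, m); scaling matches
    kp = feature_map(k / d ** 0.25, proj)     # softmax's q.k / sqrt(d)
    kv = torch.einsum("blm,bld->bmd", kp, v)  # (B, m, d) summary
    z = 1.0 / (qp @ kp.sum(dim=1).unsqueeze(-1)).squeeze(-1)  # normalizer
    return torch.einsum("blm,bmd->bld", qp, kv) * z.unsqueeze(-1)
```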
This list is automatically generated from the titles and abstracts of the papers on this site.