Transfer learning of phase transitions in percolation and directed
percolation
- URL: http://arxiv.org/abs/2112.15516v3
- Date: Tue, 4 Jan 2022 01:48:03 GMT
- Title: Transfer learning of phase transitions in percolation and directed
percolation
- Authors: Jianmin Shen, Feiyi Liu, Shiyang Chen, Dian Xu, Xiangna Chen,
Shengfeng Deng, Wei Li, Gabor Papp, Chunbin Yang
- Abstract summary: We apply a domain adversarial neural network (DANN) based on transfer learning to the study of equilibrium and non-equilibrium phase transition models.
The DANN learning of both models yields reliable results comparable to those from Monte Carlo simulations.
- Score: 2.0342076109301583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in statistical physics have shown the remarkable
performance of machine learning in identifying phase transitions. In this paper,
we apply a domain adversarial neural network (DANN) based on transfer learning
to the study of equilibrium and non-equilibrium phase transition models, namely
the percolation model and the directed percolation (DP) model, respectively.
With the DANN, only a small, automatically chosen fraction of the input
configurations (2D images) needs to be labeled in order to capture the critical
point. To learn the DP model, the method is refined with an iterative procedure
for determining the critical point, which is a prerequisite for the data
collapse used to calculate the critical exponent $\nu_{\perp}$. We then apply
the DANN to two-dimensional site percolation with configurations filtered to
include only the largest cluster, which may contain the information related to
the order parameter. The DANN learning of both models yields reliable results
comparable to those from Monte Carlo simulations. Our study also shows that the
DANN can achieve quite high accuracy at a much lower cost than supervised
learning.
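To make the adversarial transfer-learning idea concrete, below is a minimal sketch of a DANN of the kind the abstract describes, written in PyTorch. The architecture, layer sizes, lattice size L, and the two-class heads are illustrative assumptions rather than the authors' exact setup; the essential ingredient is the gradient-reversal layer, which trains the shared features to predict the phase label on the few labeled configurations while remaining indistinguishable between the labeled (source) and unlabeled near-critical (target) configurations.

```python
# Minimal DANN sketch (PyTorch), assuming L x L site-occupation configurations
# fed in as single-channel images. Layer sizes and the two-class heads are
# illustrative choices, not the exact architecture used in the paper.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DANN(nn.Module):
    def __init__(self, L=32):
        super().__init__()
        # Shared feature extractor for the lattice configurations.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (L // 4) ** 2, 64), nn.ReLU(),
        )
        # Label head: phase below / above the transition, trained on the few
        # labeled source configurations.
        self.label_clf = nn.Linear(64, 2)
        # Domain head: labeled (source) vs. unlabeled near-critical (target);
        # gradient reversal pushes the features to be domain-invariant.
        self.domain_clf = nn.Linear(64, 2)

    def forward(self, x, lam=1.0):
        h = self.features(x)
        return self.label_clf(h), self.domain_clf(GradReverse.apply(h, lam))
```

Roughly, labeled configurations generated far from the suspected transition serve as the source domain, unlabeled configurations scanned across the transition serve as the target domain, and the critical point is read off where the label classifier's output crosses 1/2. With a critical-point estimate in hand, curves of that output for different lattice sizes can be collapsed against $(p - p_c)\,L^{1/\nu_{\perp}}$, the standard finite-size-scaling route to $\nu_{\perp}$ that the abstract refers to.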
Related papers
- A Bayesian Approach to Data Point Selection [24.98069363998565]
Data point selection (DPS) is becoming a critical topic in deep learning.
Existing approaches to DPS are predominantly based on a bi-level optimisation (BLO) formulation.
We propose a novel Bayesian approach to DPS.
arXiv Detail & Related papers (2024-11-06T09:04:13Z)
- 4D Contrastive Superflows are Dense 3D Representation Learners [62.433137130087445]
We introduce SuperFlow, a novel framework designed to harness consecutive LiDAR-camera pairs for establishing pretraining objectives.
To further boost learning efficiency, we incorporate a plug-and-play view consistency module that enhances alignment of the knowledge distilled from camera views.
arXiv Detail & Related papers (2024-07-08T17:59:54Z)
- Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment [65.15914284008973]
We propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model.
We show that the proposed algorithms converge to the stationary solutions of the IRL problem.
Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process.
arXiv Detail & Related papers (2024-05-28T07:11:05Z)
- Particle density and critical point for studying site percolation by finite size scaling [6.449416869164504]
We study the relationship between particle number density, critical point, and latent variables in the site percolation model.
Unsupervised learning yields reliable results consistent with Monte Carlo simulations.
arXiv Detail & Related papers (2023-11-20T10:21:50Z)
- Diffusion-Model-Assisted Supervised Learning of Generative Models for Density Estimation [10.793646707711442]
We present a framework for training generative models for density estimation.
We use the score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z)
- DiffusionEngine: Diffusion Model is Scalable Data Engine for Object Detection [41.436817746749384]
Diffusion Model is a scalable data engine for object detection.
DiffusionEngine (DE) provides high-quality detection-oriented training pairs in a single stage.
arXiv Detail & Related papers (2023-09-07T17:55:01Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranks 1st place.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- Determination of the critical points for systems of directed percolation class using machine learning [0.0]
We use a CNN and DBSCAN to determine the critical points for the directed bond percolation (bond DP) model and the Domany-Kinzel (DK) cellular automaton model.
Our results from both algorithms show that, even for very small lattice sizes, the machine can predict the critical points accurately for both models.
arXiv Detail & Related papers (2023-07-19T20:58:12Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)