Particle density and critical point for studying site percolation by finite size scaling
- URL: http://arxiv.org/abs/2311.14725v2
- Date: Wed, 8 May 2024 14:13:14 GMT
- Title: Particle density and critical point for studying site percolation by finite size scaling
- Authors: Dian Xu, Shanshan Wang, Feng Gao, Wei Li, Jianmin Shen,
- Abstract summary: We study the relationship between particle number density, critical point, and latent variables in the site percolation model.
Unsupervised learning yields reliable results consistent with Monte Carlo simulations.
- Score: 6.449416869164504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning has recently achieved remarkable success in studying phase transitions. It is generally believed that the latent variables of unsupervised learning can capture information related to phase transitions, which is usually conveyed through the so-called order parameter. In most models, for instance the Ising model, the order parameter is simply the particle number density. Percolation, however, the simplest model that can generate a phase transition, has a unique order parameter which is not the particle number density. In this paper, we use unsupervised learning to study the relationship between particle number density, critical point, and latent variables in the site percolation model. We find that if the input to the learning is the raw configuration, the output of unsupervised learning conveys no information related to the phase transition. Therefore, the maximum cluster is employed instead, in order to effectively capture the critical point of the model. Unsupervised learning then yields reliable results consistent with Monte Carlo simulations. We also propose a method called Fake Finite Size Scaling (FFSS) to calculate the critical value, which greatly improves the fitting accuracy.
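The pipeline described in the abstract (generate site-percolation configurations, keep only the maximum cluster, then apply unsupervised learning to extract a latent variable) can be sketched as follows. This is a hypothetical minimal illustration, not the authors' code: the lattice size, the probability grid, and the use of PCA via SVD as the unsupervised learner are all assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import label  # connected-component labeling
from numpy.linalg import svd     # PCA via SVD, avoiding extra dependencies

def site_percolation(L, p, rng):
    """Occupy each site of an L x L lattice independently with probability p."""
    return (rng.random((L, L)) < p).astype(int)

def largest_cluster(config):
    """Return a binary map keeping only the largest connected cluster."""
    labels, n = label(config)  # 4-neighbour connectivity by default
    if n == 0:
        return np.zeros_like(config)
    sizes = np.bincount(labels.ravel())[1:]  # cluster sizes, skipping background
    biggest = np.argmax(sizes) + 1           # label ids start at 1
    return (labels == biggest).astype(int)

rng = np.random.default_rng(0)
L = 32
ps = np.linspace(0.3, 0.9, 25)
# One maximum-cluster configuration per occupation probability p,
# flattened into a (samples x sites) data matrix for unsupervised learning.
X = np.array([largest_cluster(site_percolation(L, p, rng)).ravel() for p in ps])
Xc = X - X.mean(axis=0)                 # center the data
U, S, Vt = svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                        # first principal component: a latent variable
```

Per the abstract's finding, feeding the raw `site_percolation` configurations into the PCA step instead of `largest_cluster` outputs would yield a latent variable with no signal at the transition; with maximum-cluster inputs, `pc1` is expected to vary sharply near the 2D site-percolation threshold (p_c ≈ 0.5927 on the square lattice).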
Related papers
- GCC: Generative Calibration Clustering [55.44944397168619]
We propose a novel Generative Calibration Clustering (GCC) method to incorporate feature learning and augmentation into the clustering procedure.
First, we develop a discriminative feature alignment mechanism to discover the intrinsic relationship across real and generated samples.
Second, we design a self-supervised metric learning to generate more reliable cluster assignment.
arXiv Detail & Related papers (2024-04-14T01:51:11Z) - Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between the predicted confidence and the actual performance.
We introduce Dynamic Regularization (DReg) which aims to learn what should be learned during training thereby circumventing the confidence adjusting trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z) - Stabilizing and Improving Federated Learning with Non-IID Data and Client Dropout [15.569507252445144]
Data heterogeneity induced by label distribution skew has been shown to be a significant obstacle limiting model performance in federated learning.
We propose a simple yet effective framework by introducing a prior-calibrated softmax function for computing the cross-entropy loss.
The improved model performance over existing baselines in the presence of non-IID data and client dropout is demonstrated.
arXiv Detail & Related papers (2023-03-11T05:17:59Z) - MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants.
arXiv Detail & Related papers (2022-09-26T12:04:49Z) - A Deep Dive into Deep Cluster [0.2578242050187029]
DeepCluster is a simple and scalable unsupervised pretraining method for visual representations.
We show that DeepCluster convergence and performance depend on the interplay between the quality of the randomly initialized filters of the convolutional layer and the selected number of clusters.
arXiv Detail & Related papers (2022-07-24T22:55:09Z) - Hyperspherical Consistency Regularization [45.00073340936437]
We explore the relationship between self-supervised learning and supervised learning, and study how self-supervised learning helps robust data-efficient deep learning.
We propose hyperspherical consistency regularization (HCR), a simple yet effective plug-and-play method, to regularize the classifier using feature-dependent information and thus avoid bias from labels.
arXiv Detail & Related papers (2022-06-02T02:41:13Z) - Scale-Equivalent Distillation for Semi-Supervised Object Detection [57.59525453301374]
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, generating hard pseudo-labels by a teacher model on unlabeled data as supervisory signals.
We analyze the challenges these methods meet with the empirical experiment results.
We introduce a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.
arXiv Detail & Related papers (2022-03-23T07:33:37Z) - Transfer learning of phase transitions in percolation and directed percolation [2.0342076109301583]
We apply a domain adversarial neural network (DANN) based on transfer learning to study non-equilibrium and equilibrium phase transition models.
The DANN learning of both models yields reliable results which are comparable to the ones from Monte Carlo simulations.
arXiv Detail & Related papers (2021-12-31T15:24:09Z) - A learning algorithm with emergent scaling behavior for classifying phase transitions [0.0]
We introduce a supervised learning algorithm for studying critical phenomena from measurement data.
We test it on the transverse-field Ising chain and the q=6 Potts model.
Our algorithm correctly identifies the thermodynamic phase of the system and extracts scaling behavior from projective measurements.
arXiv Detail & Related papers (2021-03-29T18:05:27Z) - Improving Face Recognition by Clustering Unlabeled Faces in the Wild [77.48677160252198]
We propose a novel identity separation method based on extreme value theory.
It greatly reduces the problems caused by overlapping-identity label noise.
Experiments on both controlled and real settings demonstrate our method's consistent improvements.
arXiv Detail & Related papers (2020-07-14T12:26:50Z) - Learning Causal Models Online [103.87959747047158]
Predictive models can rely on spurious correlations in the data for making predictions.
One solution for achieving strong generalization is to incorporate causal structures in the models.
We propose an online algorithm that continually detects and removes spurious features.
arXiv Detail & Related papers (2020-06-12T20:49:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.