Particle density and critical point for studying site percolation by finite size scaling
- URL: http://arxiv.org/abs/2311.14725v2
- Date: Wed, 8 May 2024 14:13:14 GMT
- Title: Particle density and critical point for studying site percolation by finite size scaling
- Authors: Dian Xu, Shanshan Wang, Feng Gao, Wei Li, Jianmin Shen,
- Abstract summary: We study the relationship between particle number density, critical point, and latent variables in the site percolation model.
Unsupervised learning yields reliable results consistent with Monte Carlo simulations.
- Score: 6.449416869164504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning has recently achieved remarkable success in studying phase transitions. It is generally believed that the latent variables of unsupervised learning can capture information related to phase transitions, which is usually carried by the so-called order parameter. In many models, such as the Ising model, the order parameter is simply the particle number density. Percolation, the simplest model that can generate a phase transition, however, has an order parameter that is not the particle number density. In this paper, we use unsupervised learning to study the relationship between particle number density, critical point, and latent variables in the site percolation model. We find that if the input to the learning is the original configuration, the output of unsupervised learning conveys no information related to the phase transition. Therefore, the maximum cluster is employed instead, which effectively captures the critical point of the model. Unsupervised learning then yields reliable results consistent with Monte Carlo simulations. We also propose a method called Fake Finite Size Scaling (FFSS) to calculate the critical value, which greatly improves the fitting accuracy.
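The maximum-cluster preprocessing mentioned in the abstract can be sketched as follows, assuming site percolation on a square lattice with nearest-neighbor (4-site) connectivity; `max_cluster_configuration` is a hypothetical helper name for illustration, not code from the paper.

```python
import numpy as np
from scipy import ndimage

def max_cluster_configuration(L, p, rng=None):
    """Occupy an L x L square lattice with probability p and keep only
    the largest nearest-neighbor cluster (all other sites set to 0)."""
    rng = np.random.default_rng(rng)
    config = (rng.random((L, L)) < p).astype(int)
    # Label connected clusters (default 4-neighbor connectivity).
    labels, n = ndimage.label(config)
    if n == 0:
        return config  # empty lattice: nothing to keep
    sizes = ndimage.sum(config, labels, index=np.arange(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    return (labels == largest).astype(int)
```

Configurations produced this way, rather than the raw occupation patterns, would then be fed to the unsupervised learner (e.g. PCA or an autoencoder).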
Related papers
- Towards Learnable Anchor for Deep Multi-View Clustering [49.767879678193005]
In this paper, we propose the Deep Multi-view Anchor Clustering (DMAC) model that performs clustering in linear time.
With the optimal anchors, the full sample graph is calculated to derive a discriminative embedding for clustering.
Experiments on several datasets demonstrate superior performance and efficiency of DMAC compared to state-of-the-art competitors.
arXiv Detail & Related papers (2025-03-16T09:38:11Z) - Studying Classifier(-Free) Guidance From a Classifier-Centric Perspective [100.54185280153753]
We find that both classifier guidance and classifier-free guidance achieve conditional generation by pushing the denoising diffusion trajectories away from decision boundaries.
We propose a generic postprocessing step built upon flow-matching to shrink the gap between the learned distribution for a pretrained denoising diffusion model and the real data distribution.
arXiv Detail & Related papers (2025-03-13T17:59:59Z) - Large-Scale Targeted Cause Discovery with Data-Driven Learning [66.86881771339145]
We propose a novel machine learning approach for inferring causal variables of a target variable from observations.
By employing a local-inference strategy, our approach has linear complexity in the number of variables and efficiently scales to thousands of variables.
Empirical results demonstrate superior performance in identifying causal relationships within large-scale gene regulatory networks.
arXiv Detail & Related papers (2024-08-29T02:21:11Z) - Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z) - Stabilizing and Improving Federated Learning with Non-IID Data and
Client Dropout [15.569507252445144]
Data heterogeneity induced by label distribution skew has been shown to be a significant obstacle that limits model performance in federated learning.
We propose a simple yet effective framework by introducing a prior-calibrated softmax function for computing the cross-entropy loss.
We demonstrate improved model performance over existing baselines in the presence of non-IID data and client dropout.
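The prior-calibrated softmax is described only at a high level here; one common calibration of this kind shifts each logit by the log class prior before normalizing (a logit-adjustment sketch under that assumption, not necessarily this paper's exact function; `prior_calibrated_cross_entropy` is a hypothetical name).

```python
import numpy as np

def prior_calibrated_cross_entropy(logits, label, prior):
    """Cross-entropy with a prior-calibrated softmax: shift each logit by
    the log of its class prior before normalizing, so that classes are not
    penalized purely because of label-distribution skew."""
    z = logits + np.log(prior)
    z = z - z.max()  # subtract max for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]
```

With a uniform prior the shift is a constant and the loss reduces to the standard cross-entropy.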
arXiv Detail & Related papers (2023-03-11T05:17:59Z) - Particle-Based Score Estimation for State Space Model Learning in
Autonomous Driving [62.053071723903834]
Multi-object state estimation is a fundamental problem for robotic applications.
We consider learning maximum-likelihood parameters using particle methods.
We apply our method to real data collected from autonomous vehicles.
arXiv Detail & Related papers (2022-12-14T01:21:05Z) - MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants.
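The objective above can be illustrated with a minimal sketch: given predicted class distributions for an unlabeled sample and its augmented variants, the worst-case loss is the largest divergence to any variant. This is a generic KL-based illustration under that reading, not the paper's exact formulation; `worst_case_consistency` is a hypothetical name.

```python
import numpy as np

def worst_case_consistency(p_orig, p_augs):
    """Worst-case consistency loss: the largest KL divergence between the
    prediction on an unlabeled sample and any of its augmented variants."""
    eps = 1e-12  # guard against log(0) for zero-probability classes
    kls = [(p_orig * np.log((p_orig + eps) / (p + eps))).sum() for p in p_augs]
    return max(kls)
```

Minimizing this maximum, rather than the average, forces the model to be consistent even on its hardest augmentation.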
arXiv Detail & Related papers (2022-09-26T12:04:49Z) - A Deep Dive into Deep Cluster [0.2578242050187029]
DeepCluster is a simple and scalable method for unsupervised pretraining of visual representations.
We show that DeepCluster convergence and performance depend on the interplay between the quality of the randomly initialized filters of the convolutional layer and the selected number of clusters.
arXiv Detail & Related papers (2022-07-24T22:55:09Z) - Hyperspherical Consistency Regularization [45.00073340936437]
We explore the relationship between self-supervised learning and supervised learning, and study how self-supervised learning helps robust data-efficient deep learning.
We propose hyperspherical consistency regularization (HCR), a simple yet effective plug-and-play method, to regularize the classifier using feature-dependent information and thus avoid bias from labels.
arXiv Detail & Related papers (2022-06-02T02:41:13Z) - Fundamental limits to learning closed-form mathematical models from data [0.0]
Given a noisy dataset, when is it possible to learn the true generating model from the data alone?
We show that this problem displays a transition from a low-noise phase in which the true model can be learned, to a phase in which the observation noise is too high for the true model to be learned by any method.
arXiv Detail & Related papers (2022-04-06T10:00:33Z) - Transfer learning of phase transitions in percolation and directed
percolation [2.0342076109301583]
We apply a domain adversarial neural network (DANN) based on transfer learning to study non-equilibrium and equilibrium phase transition models.
The DANN learning of both models yields reliable results which are comparable to the ones from Monte Carlo simulations.
arXiv Detail & Related papers (2021-12-31T15:24:09Z) - A learning algorithm with emergent scaling behavior for classifying
phase transitions [0.0]
We introduce a supervised learning algorithm for studying critical phenomena from measurement data.
We test it on the transverse field Ising chain and q=6 Potts model.
Our algorithm correctly identifies the thermodynamic phase of the system and extracts scaling behavior from projective measurements.
arXiv Detail & Related papers (2021-03-29T18:05:27Z) - Contrastive learning of strong-mixing continuous-time stochastic
processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z) - Unsupervised machine learning of topological phase transitions from
experimental data [52.77024349608834]
We apply unsupervised machine learning techniques to experimental data from ultracold atoms.
We obtain the topological phase diagram of the Haldane model in a completely unbiased fashion.
Our work provides a benchmark for unsupervised detection of new exotic phases in complex many-body systems.
arXiv Detail & Related papers (2021-01-14T16:38:21Z) - Emergence of a finite-size-scaling function in the supervised learning
of the Ising phase transition [0.7658140759553149]
We investigate the connection between the supervised learning of the binary phase classification in the ferromagnetic Ising model and the standard finite-size-scaling theory of the second-order phase transition.
We show that a single free parameter is sufficient to describe the data-driven emergence of the universal finite-size-scaling function in the network output.
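For reference, the textbook second-order finite-size-scaling ansatz behind this summary is O(T, L) = L^(-beta/nu) f((T - Tc) L^(1/nu)). The data-collapse rescaling can be sketched as below; this is the generic ansatz, not the paper's single-parameter network construction, and `fss_rescale` is a hypothetical helper.

```python
import numpy as np

def fss_rescale(T, O, L, Tc, beta, nu):
    """Rescale raw data (T, O) measured at linear size L onto the
    finite-size-scaling form O = L^(-beta/nu) f((T - Tc) L^(1/nu)).
    Returns the scaling variable x and the collapsed observable y."""
    x = (np.asarray(T) - Tc) * L ** (1.0 / nu)
    y = np.asarray(O) * L ** (beta / nu)
    return x, y
```

If Tc, beta, and nu are correct, curves from different system sizes collapse onto the single scaling function f.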
arXiv Detail & Related papers (2020-10-01T12:34:12Z) - Improving Face Recognition by Clustering Unlabeled Faces in the Wild [77.48677160252198]
We propose a novel identity separation method based on extreme value theory.
It greatly reduces the problems caused by overlapping-identity label noise.
Experiments on both controlled and real settings demonstrate our method's consistent improvements.
arXiv Detail & Related papers (2020-07-14T12:26:50Z) - Interpolation and Learning with Scale Dependent Kernels [91.41836461193488]
We study the learning properties of nonparametric ridge-less least squares.
We consider the common case of estimators defined by scale dependent kernels.
arXiv Detail & Related papers (2020-06-17T16:43:37Z) - Learning Causal Models Online [103.87959747047158]
Predictive models can rely on spurious correlations in the data for making predictions.
One solution for achieving strong generalization is to incorporate causal structures in the models.
We propose an online algorithm that continually detects and removes spurious features.
arXiv Detail & Related papers (2020-06-12T20:49:20Z) - Unsupervised machine learning of quantum phase transitions using
diffusion maps [77.34726150561087]
We show that the diffusion map method, which performs nonlinear dimensionality reduction and spectral clustering of the measurement data, has significant potential for learning complex phase transitions unsupervised.
This method works for measurements of local observables in a single basis and is thus readily applicable to many experimental quantum simulators.
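The pipeline summarized above (Gaussian kernel, Markov normalization, spectral embedding) can be sketched in a few lines; `eps` is an assumed kernel bandwidth, and the dense eigendecomposition is a minimal illustration rather than the method as tuned for experimental data.

```python
import numpy as np

def diffusion_map(X, eps, n_components=2):
    """Nonlinear dimensionality reduction via a diffusion map: build a
    Gaussian kernel, row-normalize it into a Markov transition matrix,
    and embed samples with the leading nontrivial eigenvectors."""
    # Pairwise squared Euclidean distances between samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)  # row-stochastic matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial eigenvalue 1 (constant eigenvector).
    return vecs.real[:, order[1:n_components + 1]]
```

The leading nontrivial coordinate typically separates well-defined clusters, which is how spectral clustering of measurement snapshots distinguishes phases.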
arXiv Detail & Related papers (2020-03-16T18:40:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.