Fast Detection of Phase Transitions with Multi-Task
Learning-by-Confusion
- URL: http://arxiv.org/abs/2311.09128v1
- Date: Wed, 15 Nov 2023 17:17:49 GMT
- Title: Fast Detection of Phase Transitions with Multi-Task
Learning-by-Confusion
- Authors: Julian Arnold, Frank Schäfer, Niels Lörch
- Abstract summary: One of the most popular approaches to identifying critical points from data without prior knowledge of the underlying phases is the learning-by-confusion scheme.
Up to now, the scheme required training a distinct binary classifier for each possible splitting of the grid into two sides, resulting in a computational cost that scales linearly with the number of grid points.
In this work, we propose and showcase an alternative implementation that only requires the training of a single multi-class classifier.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning has been successfully used to study phase transitions. One
of the most popular approaches to identifying critical points from data without
prior knowledge of the underlying phases is the learning-by-confusion scheme.
As input, it requires system samples drawn from a grid of the parameter whose
change is associated with potential phase transitions. Up to now, the scheme
required training a distinct binary classifier for each possible splitting of
the grid into two sides, resulting in a computational cost that scales linearly
with the number of grid points. In this work, we propose and showcase an
alternative implementation that only requires the training of a single
multi-class classifier. Ideally, such multi-task learning eliminates the
scaling with respect to the number of grid points. In applications to the Ising
model and an image dataset generated with Stable Diffusion, we find significant
speedups that closely correspond to the ideal case, with only minor deviations.
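As a concrete illustration of the proposed scheme, below is a minimal, hypothetical sketch based only on the abstract above: a single multi-class classifier is trained once to predict which grid point a sample was drawn from, and the binary classifier for any candidate splitting is then read off by summing the predicted class probabilities on either side of the split, so that all splittings share one training run instead of requiring one run each. The toy Gaussian data, network architecture, and marginalization rule are illustrative assumptions, not the authors' implementation.
```python
# Hypothetical sketch of multi-task learning-by-confusion (assumptions noted
# in the text above; this is not the authors' code). One multi-class network
# is trained to predict the grid index of each sample; every binary
# "left/right of the split" classifier is then derived by marginalizing the
# predicted class probabilities, so all splittings cost a single training run.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
K = 20           # number of grid points of the tuning parameter
n = 200          # samples per grid point
d = 16           # sample dimension (e.g., a flattened spin configuration)

# Toy data: sample statistics shift abruptly at grid index K // 2,
# mimicking a phase transition at that point.
X = np.concatenate([rng.normal(loc=0.0 if k < K // 2 else 1.0, size=(n, d))
                    for k in range(K)]).astype(np.float32)
y = np.repeat(np.arange(K), n)

model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, K))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
Xt, yt = torch.from_numpy(X), torch.from_numpy(y)

for _ in range(200):          # ONE training run covers all K - 1 splittings
    opt.zero_grad()
    loss_fn(model(Xt), yt).backward()
    opt.step()

with torch.no_grad():
    p = torch.softmax(model(Xt), dim=1).numpy()   # (N, K) class probabilities

# For each splitting, predict "right of split" when the probability mass on
# the right exceeds 1/2; the resulting accuracy-vs-split curve is the usual
# confusion signal, peaking (away from the trivial edges) at the transition.
for split in range(1, K):
    acc = np.mean((p[:, split:].sum(axis=1) > 0.5) == (y >= split))
    print(f"split at grid index {split:2d}: implied binary accuracy {acc:.3f}")
```
Under this reading, the cost of scanning all K - 1 splittings is dominated by the single training run, which would be the source of the near-ideal speedups reported in the abstract.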
Related papers
- A Closer Look at Few-shot Classification Again [68.44963578735877]
Few-shot classification consists of a training phase and an adaptation phase.
We empirically show that the training algorithm and the adaptation algorithm can be completely disentangled.
Our meta-analysis for each phase reveals several interesting insights that may help better understand key aspects of few-shot classification.
arXiv Detail & Related papers (2023-01-28T16:42:05Z)
- Point Cloud Upsampling via Cascaded Refinement Network [39.79759035338819]
Upsampling point clouds in a coarse-to-fine manner is a sound approach.
However, existing coarse-to-fine upsampling methods require extra training strategies.
In this paper, we propose a simple yet effective cascaded refinement network.
arXiv Detail & Related papers (2022-10-08T07:09:37Z)
- CloudAttention: Efficient Multi-Scale Attention Scheme For 3D Point Cloud Learning [81.85951026033787]
In this work, we adopt transformers and incorporate them into a hierarchical framework for shape classification as well as part and scene segmentation.
We also compute efficient and dynamic global cross attentions by leveraging sampling and grouping at each iteration.
The proposed hierarchical model achieves state-of-the-art mean accuracy on shape classification and yields segmentation results on par with previous methods.
arXiv Detail & Related papers (2022-07-31T21:39:15Z)
- Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning [41.07029317930986]
We propose a variance-sensitive class of models that operates in a low-label regime.
The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance based classifier.
We further extend this approach to a transductive learning setting, proposing Transductive CNAPS.
arXiv Detail & Related papers (2022-01-13T18:59:02Z)
- Transfer learning of phase transitions in percolation and directed percolation [2.0342076109301583]
We apply a domain adversarial neural network (DANN) based on transfer learning to study non-equilibrium and equilibrium phase-transition models.
For both models, DANN yields reliable results comparable to those obtained from Monte Carlo simulations.
arXiv Detail & Related papers (2021-12-31T15:24:09Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Exploiting Invariance in Training Deep Neural Networks [4.169130102668252]
Inspired by two basic mechanisms in animal visual systems, we introduce a feature transform technique that imposes invariance properties in the training of deep neural networks.
The resulting algorithm requires less parameter tuning, trains well with an initial learning rate of 1.0, and generalizes easily to different tasks.
Tested on the ImageNet, MS COCO, and Cityscapes datasets, the technique requires fewer iterations to train, surpasses all baselines by a large margin, works seamlessly with both small and large batch sizes, and applies to image classification, object detection, and semantic segmentation.
arXiv Detail & Related papers (2021-03-30T19:18:31Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on every task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Multi-Stage Transfer Learning with an Application to Selection Process [5.933303832684138]
In multi-stage processes, decisions happen in an ordered sequence of stages.
In this work, we propose a Multi-StaGe Transfer Learning (MSGTL) approach that uses knowledge from simple classifiers trained in early stages.
We show that it is possible to control the trade-off between conserving knowledge and fine-tuning using a simple probabilistic map.
arXiv Detail & Related papers (2020-06-01T21:27:04Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT
Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework in which the first stage quickly localizes the prostate region and the second stage precisely segments the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method learns more representative voxel-level features than conventional methods trained with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z)