Transfer Learning Application of Self-supervised Learning in ARPES
- URL: http://arxiv.org/abs/2208.10893v1
- Date: Tue, 23 Aug 2022 11:58:05 GMT
- Title: Transfer Learning Application of Self-supervised Learning in ARPES
- Authors: Sandy Adhitia Ekahana, Genta Indra Winata, Y. Soh, Gabriel Aeppli,
Milan Radovic, Ming Shi
- Abstract summary: Recent developments in the angle-resolved photoemission spectroscopy (ARPES) technique involve spatially resolving samples.
One resulting analysis task is to label similar dispersion cuts and map them spatially.
In this work, we demonstrate that a recent representational learning model combined with k-means clustering can help automate that part of the data analysis.
- Score: 12.019651078748236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent development in angle-resolved photoemission spectroscopy (ARPES)
technique involves spatially resolving samples while maintaining the
high-resolution feature of momentum space. This development greatly expands the
data size and its complexity for data analysis; one such task is to label
similar dispersion cuts and map them spatially. In this work, we demonstrate
that a recent representational learning (self-supervised learning) model
combined with k-means clustering can help automate that part of the data
analysis and save precious time, albeit with low performance. Finally, we
introduce a few-shot learning (k-nearest neighbour, or kNN) approach in the
representational space, where we selectively choose one (k=1) reference image
for each known label and subsequently label the rest of the data according to
the nearest reference image. This last approach demonstrates the strength of
self-supervised learning for automating image analysis in ARPES in particular,
and it can be generalized to any scientific data analysis that heavily involves
image data.
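To make the two labelling strategies in the abstract concrete, below is a minimal Python sketch (not the authors' implementation): embeddings from a self-supervised encoder are grouped with k-means, and a few-shot 1-nearest-neighbour rule assigns labels using one hand-picked reference image per known class. The `encode` function is a hypothetical placeholder for any pretrained self-supervised model (e.g., a SimCLR/BYOL-style network) that maps each image to a feature vector.

```python
# Sketch under assumptions: `encode` stands in for a frozen self-supervised encoder.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors


def encode(images: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: return one embedding vector per image.
    In practice this would be a forward pass through the pretrained encoder."""
    return images.reshape(len(images), -1).astype(np.float32)


def cluster_cuts(images: np.ndarray, n_clusters: int) -> np.ndarray:
    """Unsupervised grouping of dispersion cuts: SSL embeddings + k-means."""
    z = encode(images)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(z)


def label_by_reference(images: np.ndarray,
                       reference_images: np.ndarray,
                       reference_labels: list) -> list:
    """Few-shot labelling: one reference image per class (k=1); every remaining
    cut is assigned the label of its nearest reference in representation space."""
    z = encode(images)
    z_ref = encode(reference_images)
    nn = NearestNeighbors(n_neighbors=1).fit(z_ref)
    _, idx = nn.kneighbors(z)
    return [reference_labels[i] for i in idx.ravel()]
```

The design choice mirrors the abstract: clustering needs no labels at all but yields lower accuracy, while the 1-NN scheme trades a handful of manually chosen references for much more reliable spatial label maps.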
Related papers
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z) - Unsupervised Few-Shot Continual Learning for Remote Sensing Image Scene Classification [14.758282519523744]
An unsupervised flat-wide learning approach (UNISA) is proposed for unsupervised few-shot continual learning in remote sensing image scene classification.
Our numerical study with remote sensing image scene datasets and a hyperspectral dataset confirms the advantages of our solution.
arXiv Detail & Related papers (2024-06-04T03:06:41Z) - SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z) - Clustering augmented Self-Supervised Learning: An Application to Land
Cover Mapping [10.720852987343896]
We introduce a new method for land cover mapping by using a clustering based pretext task for self-supervised learning.
We demonstrate the effectiveness of the method on two societally relevant applications.
arXiv Detail & Related papers (2021-08-16T19:35:43Z) - Semi-supervised Superpixel-based Multi-Feature Graph Learning for
Hyperspectral Image Data [0.0]
We present a novel framework for the classification of Hyperspectral Image (HSI) data in light of a very limited amount of labelled data.
We propose a multi-stage edge-efficient semi-supervised graph learning framework for HSI data.
arXiv Detail & Related papers (2021-04-27T15:36:26Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Sparse Signal Models for Data Augmentation in Deep Learning ATR [0.8999056386710496]
We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm.
We exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting.
arXiv Detail & Related papers (2020-12-16T21:46:33Z) - IntroVAC: Introspective Variational Classifiers for Learning
Interpretable Latent Subspaces [6.574517227976925]
IntroVAC learns interpretable latent subspaces by exploiting information from an additional label.
We show that IntroVAC is able to learn meaningful directions in the latent space enabling fine manipulation of image attributes.
arXiv Detail & Related papers (2020-08-03T10:21:41Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.