Self-Supervised Representation Learning for Astronomical Images
- URL: http://arxiv.org/abs/2012.13083v2
- Date: Thu, 8 Apr 2021 16:06:13 GMT
- Title: Self-Supervised Representation Learning for Astronomical Images
- Authors: Md Abul Hayat, George Stein, Peter Harrington, Zarija Lukić, Mustafa Mustafa
- Abstract summary: Self-supervised learning recovers representations of sky survey images that are semantically useful.
We show that our approach can achieve the accuracy of supervised models while using 2-4 times fewer labels for training.
- Score: 1.0499611180329804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sky surveys are the largest data generators in astronomy, making automated
tools for extracting meaningful scientific information an absolute necessity.
We show that, without the need for labels, self-supervised learning recovers
representations of sky survey images that are semantically useful for a variety
of scientific tasks. These representations can be directly used as features, or
fine-tuned, to outperform supervised methods trained only on labeled data. We
apply a contrastive learning framework on multi-band galaxy photometry from the
Sloan Digital Sky Survey (SDSS) to learn image representations. We then use
them for galaxy morphology classification, and fine-tune them for photometric
redshift estimation, using labels from the Galaxy Zoo 2 dataset and SDSS
spectroscopy. In both downstream tasks, using the same learned representations,
we outperform the supervised state-of-the-art results, and we show that our
approach can achieve the accuracy of supervised models while using 2-4 times
fewer labels for training.
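For illustration, below is a minimal sketch of this kind of contrastive pre-training plus fine-tuning pipeline, assuming a SimCLR-style setup in PyTorch. The ResNet backbone, the augmentation callable, the NT-Xent temperature, and the redshift head are placeholder choices for illustration, not the paper's exact architecture or hyperparameters.

```python
# Illustrative SimCLR-style contrastive pre-training on multi-band galaxy
# cutouts, followed by a small fine-tuning head. All names and hyperparameters
# here are placeholders, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class Encoder(nn.Module):
    """ResNet backbone adapted to 5-band (ugriz) images, plus a projection head."""
    def __init__(self, n_bands: int = 5, proj_dim: int = 128):
        super().__init__()
        self.backbone = torchvision.models.resnet18(weights=None)
        # Replace the 3-channel stem with one that accepts n_bands channels.
        self.backbone.conv1 = nn.Conv2d(n_bands, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        feat_dim = self.backbone.fc.in_features
        self.backbone.fc = nn.Identity()
        self.proj = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, proj_dim))

    def forward(self, x):
        h = self.backbone(x)                   # representation used downstream
        z = F.normalize(self.proj(h), dim=1)   # embedding used by the contrastive loss
        return h, z


def nt_xent_loss(z1, z2, temperature: float = 0.1):
    """NT-Xent (InfoNCE) loss over two augmented views of the same batch."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                 # (2n, d)
    sim = z @ z.t() / temperature                  # cosine similarities (z is normalized)
    sim.fill_diagonal_(float("-inf"))              # mask self-similarity
    # The positive for sample i is its other view at index (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def pretrain_step(encoder, optimizer, batch, augment):
    """One self-supervised step: two random views of each unlabeled cutout."""
    v1, v2 = augment(batch), augment(batch)
    _, z1 = encoder(v1)
    _, z2 = encoder(v2)
    loss = nt_xent_loss(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


class RedshiftHead(nn.Module):
    """Small regression head trained (or fine-tuned) on the labeled subset."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, h):
        return self.mlp(h).squeeze(-1)
```

In this sketch, pre-training uses only unlabeled cutouts; the labeled Galaxy Zoo 2 or SDSS spectroscopic subsets would then be used to train or fine-tune small task heads (classification or redshift regression) on top of the backbone representation h.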
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z) - Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain, owing to the availability of large amounts of unlabelled data.
In this paper, we revisit transformer pre-training and leverage multi-scale information that is effectively utilized across multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z) - Self-supervised Visualisation of Medical Image Datasets [13.05427848112207]
A self-supervised learning method, $t$-SimCNE, uses contrastive learning to directly train a 2D representation suitable for visualisation.
In this work, we used $t$-SimCNE to visualise medical image datasets, including examples from dermatology, histology, and blood microscopy.
arXiv Detail & Related papers (2024-02-22T14:04:41Z) - Spiral-Elliptical automated galaxy morphology classification from telescope images [0.40792653193642503]
We develop two novel galaxy morphology statistics, descent average and descent variance, which can be efficiently extracted from telescope galaxy images.
We utilize the galaxy image data from the Sloan Digital Sky Survey to demonstrate the effective performance of our proposed image statistics.
arXiv Detail & Related papers (2023-10-10T22:36:52Z) - CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts model performance, with a 10-34% relative improvement across various labeled training data sampling ratios.
arXiv Detail & Related papers (2023-05-01T23:11:18Z) - Self Supervised Learning for Few Shot Hyperspectral Image Classification [57.2348804884321]
We propose to leverage Self Supervised Learning (SSL) for HSI classification.
We show that by pre-training an encoder on unlabeled pixels using Barlow-Twins, a state-of-the-art SSL algorithm, we can obtain accurate models with a handful of labels (a sketch of the Barlow Twins objective appears after this list).
arXiv Detail & Related papers (2022-06-24T07:21:53Z) - Self-supervised similarity search for large scientific datasets [0.0]
We present the use of self-supervised learning to explore and exploit large unlabeled datasets.
We first train a self-supervised model to distil low-dimensional representations that are robust to symmetries, uncertainties, and noise in each image.
We then use the representations to construct and publicly release an interactive semantic similarity search tool.
arXiv Detail & Related papers (2021-10-25T18:00:00Z) - Curious Representation Learning for Embodied Intelligence [81.21764276106924]
Self-supervised representation learning has achieved remarkable success in recent years.
Yet to build truly intelligent agents, we must construct representation learning algorithms that can learn from environments.
We propose a framework, curious representation learning, which jointly learns a reinforcement learning policy and a visual representation model.
arXiv Detail & Related papers (2021-05-03T17:59:20Z) - Visual Distant Supervision for Scene Graph Generation [66.10579690929623]
Scene graph models usually require supervised learning on large quantities of labeled data with intensive human annotation.
We propose visual distant supervision, a novel paradigm of visual relation learning, which can train scene graph models without any human-labeled data.
Comprehensive experimental results show that our distantly supervised model outperforms strong weakly supervised and semi-supervised baselines.
arXiv Detail & Related papers (2021-03-29T06:35:24Z) - Estimating Galactic Distances From Images Using Self-supervised Representation Learning [1.0499611180329804]
We use a contrastive self-supervised learning framework to estimate distances to galaxies from their photometric images.
We incorporate data augmentations from computer vision as well as an application-specific augmentation accounting for galactic dust.
We show that (1) pretraining on a large corpus of unlabeled data followed by fine-tuning on some labels can attain the accuracy of a fully-supervised model.
arXiv Detail & Related papers (2021-01-12T04:39:26Z) - Self-supervised Learning for Astronomical Image Classification [1.2891210250935146]
In Astronomy, a huge amount of image data is generated daily by photometric surveys.
We propose a technique to leverage unlabeled astronomical images to pre-train deep convolutional neural networks.
arXiv Detail & Related papers (2020-04-23T17:32:19Z)
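As referenced in the few-shot hyperspectral classification entry above, here is a minimal sketch of the Barlow Twins objective (Zbontar et al., 2021). The function name, the standardization epsilon, and the lambda value are illustrative choices, not that paper's exact settings.

```python
# Illustrative Barlow Twins decorrelation loss between two augmented views.
import torch


def barlow_twins_loss(z1, z2, lambd: float = 5e-3):
    """z1, z2: (batch, dim) projector outputs for two views of the same batch."""
    n, d = z1.shape
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    # Cross-correlation matrix between the two views.
    c = (z1.t() @ z2) / n                               # (dim, dim)
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()      # push diagonal toward 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # push rest toward 0
    return on_diag + lambd * off_diag
```

The on-diagonal term enforces invariance of each embedding dimension across views, while the off-diagonal term decorrelates dimensions, which avoids representational collapse without requiring negative pairs.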
This list is automatically generated from the titles and abstracts of the papers on this site.