Estimating Galactic Distances From Images Using Self-supervised
Representation Learning
- URL: http://arxiv.org/abs/2101.04293v1
- Date: Tue, 12 Jan 2021 04:39:26 GMT
- Title: Estimating Galactic Distances From Images Using Self-supervised
Representation Learning
- Authors: Md Abul Hayat, Peter Harrington, George Stein, Zarija Lukić, Mustafa
Mustafa
- Abstract summary: We use a contrastive self-supervised learning framework to estimate distances to galaxies from their photometric images.
We incorporate data augmentations from computer vision as well as an application-specific augmentation accounting for galactic dust.
We show that (1) pretraining on a large corpus of unlabeled data followed by fine-tuning on some labels can attain the accuracy of a fully-supervised model.
- Score: 1.0499611180329804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We use a contrastive self-supervised learning framework to estimate distances
to galaxies from their photometric images. We incorporate data augmentations
from computer vision as well as an application-specific augmentation accounting
for galactic dust. We find that the resulting visual representations of galaxy
images are semantically useful and allow for fast similarity searches, and can
be successfully fine-tuned for the task of redshift estimation. We show that
(1) pretraining on a large corpus of unlabeled data followed by fine-tuning on
some labels can attain the accuracy of a fully-supervised model which requires
2-4x more labeled data, and (2) that by fine-tuning our self-supervised
representations using all available data labels in the Main Galaxy Sample of
the Sloan Digital Sky Survey (SDSS), we outperform the state-of-the-art
supervised learning method.
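The contrastive objective underlying such frameworks can be illustrated with a minimal NT-Xent (normalized temperature-scaled cross-entropy) loss in NumPy. This is a generic sketch of the standard contrastive loss, not the paper's actual architecture, augmentations, or hyperparameters:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two batches of embeddings, where z1[i] and z2[i]
    are two augmented views of the same image (a positive pair)."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z1.shape[0]
    # the positive partner of sample i is i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two augmented views of each galaxy image together in representation space while pushing apart views of different images, which is what makes the learned representations useful for downstream fine-tuning.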
Related papers
- Rethinking Transformers Pre-training for Multi-Spectral Satellite
Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we re-visit transformers pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- CSP: Self-Supervised Contrastive Spatial Pre-Training for
Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts the model performance with 10-34% relative improvement with various labeled training data sampling ratios.
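One way to picture the location branch of such a dual-encoder is a fixed sinusoidal encoding of coordinates. The sketch below is a hypothetical simplification: the function name, frequency bands, and feature layout are illustrative assumptions, not CSP's actual location encoder:

```python
import numpy as np

def geo_features(latlon, num_freqs=4):
    """Encode (lat, lon) pairs in degrees as multi-frequency sin/cos features,
    so that nearby locations map to nearby feature vectors."""
    rad = np.deg2rad(latlon)                          # (N, 2) in radians
    freqs = 2.0 ** np.arange(num_freqs)               # geometric frequency bands
    ang = rad[:, :, None] * freqs[None, None, :]      # (N, 2, num_freqs)
    feats = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)
    return feats.reshape(len(latlon), -1)             # (N, 4 * num_freqs)
```

A contrastive objective would then align these location features with the image encoder's output for the photo taken at that location.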
arXiv Detail & Related papers (2023-05-01T23:11:18Z)
- Self-Supervised Representation Learning from Temporal Ordering of
Automated Driving Sequences [49.91741677556553]
We propose TempO, a temporal ordering pretext task for pre-training region-level feature representations for perception tasks.
We embed each frame by an unordered set of proposal feature vectors, a representation that is natural for object detection or tracking systems.
Extensive evaluations on the BDD100K, nuImages, and MOT17 datasets show that our TempO pre-training approach outperforms single-frame self-supervised learning methods.
arXiv Detail & Related papers (2023-02-17T18:18:27Z)
- Semi-Supervised Image Captioning by Adversarially Propagating Labeled
Data [95.0476489266988]
We present a novel data-efficient semi-supervised framework to improve the generalization of image captioning models.
Our proposed method trains a captioner to learn from paired data and to progressively associate unpaired data.
We report extensive empirical results on both (1) image-based and (2) dense region-based captioning datasets, together with a comprehensive analysis on the scarcely-paired dataset.
arXiv Detail & Related papers (2023-01-26T15:25:43Z)
- Self-Supervised Pretraining on Satellite Imagery: a Case Study on
Label-Efficient Vehicle Detection [0.0]
We study in-domain self-supervised representation learning for object detection on very high resolution optical satellite imagery.
We use the large land use classification dataset Functional Map of the World to pretrain representations with an extension of the Momentum Contrast framework.
We then investigate this model's transferability on a real-world task of fine-grained vehicle detection and classification on Preligens proprietary data.
arXiv Detail & Related papers (2022-10-21T08:41:22Z)
- Transfer Learning Application of Self-supervised Learning in ARPES [12.019651078748236]
Recent developments in the angle-resolved photoemission spectroscopy (ARPES) technique involve spatially resolving samples.
One resulting data-analysis task is to label similar dispersion cuts and map them spatially.
In this work, we demonstrate that recent developments in representation learning models, combined with k-means clustering, can help automate that part of the data analysis.
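A generic version of that clustering step, applied to precomputed embedding vectors, might look like the following. This is plain k-means in NumPy; the representation model producing the embeddings and the cluster count are assumptions here, not the paper's setup:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Cluster row vectors of X into k groups by iterating the standard
    assign-then-update k-means steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):            # keep the old center if a cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Running this on learned representations of dispersion cuts would group similar cuts, which can then be mapped back to their spatial positions on the sample.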
arXiv Detail & Related papers (2022-08-23T11:58:05Z)
- Self-supervised similarity search for large scientific datasets [0.0]
We present the use of self-supervised learning to explore and exploit large unlabeled datasets.
We first train a self-supervised model to distil low-dimensional representations that are robust to symmetries, uncertainties, and noise in each image.
We then use the representations to construct and publicly release an interactive semantic similarity search tool.
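The search itself reduces to nearest-neighbour lookup in the representation space. Below is a minimal cosine-similarity version; the function name, array layout, and the choice of cosine distance are illustrative assumptions rather than the tool's actual implementation:

```python
import numpy as np

def top_k_similar(query, embeddings, k=5):
    """Return the indices and cosine-similarity scores of the k embeddings
    most similar to the query vector."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = emb @ q                     # cosine similarity against every row
    idx = np.argsort(-scores)[:k]        # highest scores first
    return idx, scores[idx]
```

Because the representations are low-dimensional, this lookup stays fast even over millions of images, which is what makes an interactive search tool over a full sky survey practical.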
arXiv Detail & Related papers (2021-10-25T18:00:00Z)
- Visual Distant Supervision for Scene Graph Generation [66.10579690929623]
Scene graph models usually require supervised learning on large quantities of labeled data with intensive human annotation.
We propose visual distant supervision, a novel paradigm of visual relation learning, which can train scene graph models without any human-labeled data.
Comprehensive experimental results show that our distantly supervised model outperforms strong weakly supervised and semi-supervised baselines.
arXiv Detail & Related papers (2021-03-29T06:35:24Z)
- Self-Supervised Representation Learning for Astronomical Images [1.0499611180329804]
Self-supervised learning recovers representations of sky survey images that are semantically useful.
We show that our approach can achieve the accuracy of supervised models while using 2-4 times fewer labels for training.
arXiv Detail & Related papers (2020-12-24T03:25:36Z)
- Semi-Automatic Data Annotation guided by Feature Space Projection [117.9296191012968]
We present a semi-automatic data annotation approach based on suitable feature space projection and semi-supervised label estimation.
We validate our method on the popular MNIST dataset and on images of human intestinal parasites with and without fecal impurities.
Our results demonstrate the added-value of visual analytics tools that combine complementary abilities of humans and machines for more effective machine learning.
arXiv Detail & Related papers (2020-07-27T17:03:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.