Semi-supervised Learning of Galaxy Morphology using Equivariant
Transformer Variational Autoencoders
- URL: http://arxiv.org/abs/2011.08714v1
- Date: Tue, 17 Nov 2020 15:41:18 GMT
- Title: Semi-supervised Learning of Galaxy Morphology using Equivariant
Transformer Variational Autoencoders
- Authors: Mizu Nishikawa-Toomey, Lewis Smith, Yarin Gal
- Abstract summary: We develop a Variational Autoencoder (VAE) with Equivariant Transformer layers and a classifier network attached to its latent space.
We show that this novel architecture improves accuracy on the galaxy morphology classification task on the Galaxy Zoo data set.
- Score: 34.38960534620003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growth in the number of galaxy images is much faster than the speed at
which these galaxies can be labelled by humans. However, by leveraging the
information present in the ever-growing set of unlabelled images,
semi-supervised learning could be an effective way of reducing the required
labelling and increasing classification accuracy. We develop a Variational
Autoencoder (VAE) with Equivariant Transformer layers and a classifier network
attached to its latent space. We show that this novel architecture leads to
improvements in accuracy when used for the galaxy morphology classification
task on the Galaxy Zoo data set. In addition, we show that pre-training the
classifier network as part of the VAE using the unlabelled data leads to higher
accuracy with fewer labels compared to existing approaches. This novel VAE has
the potential to automate galaxy morphology classification with reduced human
labelling effort.
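To make the architecture concrete, the following is a minimal sketch, not the authors' code: a VAE whose encoder also feeds a small classifier from the latent space. The paper's Equivariant Transformer layers are not reproduced here; a plain convolutional encoder stands in for them, the single-channel input, layer sizes, and the choice to classify from the latent mean are illustrative assumptions, and the combined objective (ELBO on all images plus cross-entropy on the labelled subset) is one common way to realise semi-supervised training rather than the paper's exact loss.

```python
# Hedged sketch of a semi-supervised VAE with a classifier on the latent space.
# NOT the paper's implementation: conv layers replace the Equivariant Transformer
# layers, and all sizes/channels are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAE(nn.Module):
    def __init__(self, latent_dim=32, num_classes=10, img_size=64):
        super().__init__()
        # Encoder: conv stack -> flatten -> (mu, log_var)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        feat = 128 * (img_size // 8) ** 2
        self.fc_mu = nn.Linear(feat, latent_dim)
        self.fc_logvar = nn.Linear(feat, latent_dim)
        # Decoder: latent vector back to an image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, feat), nn.ReLU(),
            nn.Unflatten(1, (128, img_size // 8, img_size // 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Classifier network attached to the latent space
        self.classifier = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.decoder(z), mu, logvar, self.classifier(mu)

def elbo(x, recon, mu, logvar):
    # Bernoulli reconstruction term plus analytic KL to a standard normal prior
    rec = F.binary_cross_entropy(recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return rec + kl

def semi_supervised_loss(model, x_unlab, x_lab, y_lab, alpha=1.0):
    # ELBO on every image; cross-entropy only where labels exist (assumed weighting).
    recon_u, mu_u, lv_u, _ = model(x_unlab)
    recon_l, mu_l, lv_l, logits = model(x_lab)
    return (elbo(x_unlab, recon_u, mu_u, lv_u)
            + elbo(x_lab, recon_l, mu_l, lv_l)
            + alpha * F.cross_entropy(logits, y_lab))
```

In such a setup, each training batch would mix unlabelled and labelled images, and after training only the encoder mean and the classifier head are needed for prediction, which is what allows the unlabelled images to reduce the labelling burden.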
Related papers
- Discovering Galaxy Features via Dataset Distillation [7.121183597915665]
In many applications, Neural Nets (NNs) achieve classification performance on par with or even exceeding human capacity.
Here, we apply this idea to the notoriously difficult task of galaxy classification.
We present a novel way to summarize and visualize prototypical galaxy morphology through the lens of neural networks.
arXiv Detail & Related papers (2023-11-29T12:39:31Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Unsupervised Mutual Transformer Learning for Multi-Gigapixel Whole Slide Image Classification [18.452105665665858]
We propose a fully unsupervised WSI classification algorithm based on mutual transformer learning.
A discriminative learning mechanism is introduced to improve normal versus cancerous instance labeling.
In addition to unsupervised classification, we demonstrate the effectiveness of the proposed framework for weak supervision for cancer subtype classification as downstream analysis.
arXiv Detail & Related papers (2023-05-03T10:54:18Z)
- From Images to Features: Unbiased Morphology Classification via Variational Auto-Encoders and Domain Adaptation [0.8010192121024553]
We present a novel approach for the dimensionality reduction of galaxy images by leveraging a combination of variational auto-encoders (VAE) and domain adaptation (DA).
We show that 40-dimensional latent variables can effectively reproduce most morphological features in galaxy images.
We further enhance our model by tuning the VAE network via DA using galaxies in the overlapping footprint of DECaLS and BASS+MzLS.
arXiv Detail & Related papers (2023-03-15T13:54:11Z)
- Unsupervised Motion Representation Learning with Capsule Autoencoders [54.81628825371412]
Motion Capsule Autoencoder (MCAE) models motion in a two-level hierarchy.
MCAE is evaluated on a novel Trajectory20 motion dataset and various real-world skeleton-based human action datasets.
arXiv Detail & Related papers (2021-10-01T16:52:03Z)
- CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals [92.60744099084157]
We propose differentiable data augmentation amenable to gradient-based learning.
We demonstrate the relevance of our approach on the clinically relevant sleep staging classification task.
arXiv Detail & Related papers (2021-06-25T15:28:48Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- CrossTransformers: spatially-aware few-shot transfer [92.33252608837947]
Given new tasks with very little data, modern vision systems degrade remarkably quickly.
We show how the neural network representations which underpin modern vision systems are subject to supervision collapse.
We propose self-supervised learning to encourage general-purpose features that transfer better.
arXiv Detail & Related papers (2020-07-22T15:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.