Self-Supervised Clustering on Image-Subtracted Data with Deep-Embedded
Self-Organizing Map
- URL: http://arxiv.org/abs/2209.06375v1
- Date: Wed, 14 Sep 2022 02:37:06 GMT
- Authors: Y. -L. Mong, K. Ackley, T. L. Killestein, D. K. Galloway, M. Dyer, R.
Cutter, M. J. I. Brown, J. Lyman, K. Ulaczyk, D. Steeghs, V. Dhillon, P.
O'Brien, G. Ramsay, K. Noysena, R. Kotak, R. Breton, L. Nuttall, E. Palle, D.
Pollacco, E. Thrane, S. Awiphan, U. Burhanudin, P. Chote, A. Chrimes, E. Daw,
C. Duffy, R. Eyles-Ferris, B. P. Gompertz, T. Heikkila, P. Irawati, M.
Kennedy, A. Levan, S. Littlefair, L. Makrygianni, T. Marsh, D. Mata Sanchez,
S. Mattila, J. R. Maund, J. McCormac, D. Mkrtichian, J. Mullaney, E. Rol, U.
Sawangwit, E. Stanway, R. Starling, P. Strom, S. Tooke, K. Wiersema
- Abstract summary: A self-supervised machine learning model, the deep-embedded self-organizing map (DESOM), is applied to the real-bogus classification problem.
We demonstrate different model training approaches and find that our best DESOM classifier achieves a missed-detection rate of 6.6% at a false-positive rate of 1.5%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing an effective automatic classifier to separate genuine
sources from artifacts is essential for transient follow-up in wide-field
optical surveys. A key step in such classifiers is distinguishing transient
detections from subtraction artifacts left by the image-differencing process,
known as the real-bogus classification problem. We apply a self-supervised
machine learning model, the deep-embedded self-organizing map (DESOM), to this
real-bogus classification problem. DESOM combines an autoencoder and a
self-organizing map to perform clustering, distinguishing between real and
bogus detections based on their dimensionality-reduced representations. We use
32x32 normalized detection thumbnails as the input to DESOM. We demonstrate
different model training approaches and find that our best DESOM classifier
achieves a missed-detection rate of 6.6% at a false-positive rate of 1.5%.
When used in combination with other types of classifiers, for example those
built on neural networks or decision trees, DESOM offers a more nuanced way to
fine-tune the decision boundary that identifies likely real detections. We
also discuss other potential uses of DESOM and its limitations.
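The abstract describes clustering 32x32 detection thumbnails with a self-organizing map fit on dimensionality-reduced representations. A minimal sketch of that clustering stage follows; note that the actual DESOM trains the autoencoder and SOM jointly, whereas here a fixed random projection stands in for the encoder, and all shapes, grid sizes, and function names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(thumbnails, proj):
    """Stand-in encoder: flatten 32x32 thumbnails and project to a latent space."""
    return thumbnails.reshape(len(thumbnails), -1) @ proj

def train_som(latents, grid=(8, 8), epochs=10, lr0=0.5, sigma0=3.0):
    """Classic SOM update: pull the best-matching unit (and its grid
    neighbors, weighted by a shrinking Gaussian) toward each sample."""
    h, w = grid
    nodes = rng.normal(size=(h * w, latents.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps, t = epochs * len(latents), 0
    for _ in range(epochs):
        for x in latents:
            lr = lr0 * (1 - t / n_steps)                  # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 1e-3     # shrinking neighborhood
            bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbh = np.exp(-d2 / (2 * sigma ** 2))
            nodes += lr * nbh[:, None] * (x - nodes)
            t += 1
    return nodes

def bmu_index(latents, nodes):
    """Assign each detection to its best-matching SOM node (its cluster)."""
    d = ((latents[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy data: 200 "detections" as 32x32 thumbnails.
thumbs = rng.normal(size=(200, 32, 32))
proj = rng.normal(size=(32 * 32, 16)) / 32.0
z = encode(thumbs, proj)
nodes = train_som(z)
clusters = bmu_index(z, nodes)
```

In a DESOM-style classifier, each node would additionally accumulate the fraction of labeled "real" detections mapped to it; a new detection is then scored by its best-matching node's fraction, and thresholding that score is what allows the missed-detection versus false-positive trade-off to be tuned.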
Related papers
- Accurate Explanation Model for Image Classifiers using Class Association Embedding [5.378105759529487]
We propose a generative explanation model that combines the advantages of global and local knowledge.
Class association embedding (CAE) encodes each sample into a pair of separated class-associated and individual codes.
A building-block coherency feature extraction algorithm is proposed that efficiently separates class-associated features from individual ones.
arXiv Detail & Related papers (2024-06-12T07:41:00Z) - Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address this problem.
arXiv Detail & Related papers (2023-05-23T02:14:11Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of
Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z) - AnoViT: Unsupervised Anomaly Detection and Localization with Vision
Transformer-based Encoder-Decoder [3.31490164885582]
We propose a vision transformer-based encoder-decoder model, named AnoViT, to reflect normal information by additionally learning the global relationship between image patches.
The proposed model performed better than the convolution-based model on three benchmark datasets.
arXiv Detail & Related papers (2022-03-21T09:01:37Z) - Latent-Insensitive Autoencoders for Anomaly Detection and
Class-Incremental Learning [0.0]
We introduce Latent-Insensitive Autoencoder (LIS-AE) where unlabeled data from a similar domain is utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder.
We treat class-incremental learning as multiple anomaly detection tasks by adding a different latent layer for each class, using the other available classes in each task as negative examples to shape each latent layer.
arXiv Detail & Related papers (2021-10-25T16:53:49Z) - Zero-sample surface defect detection and classification based on
semantic feedback neural network [13.796631421521765]
We propose an Ensemble Co-training algorithm, which adaptively reduces the prediction error in image tag embedding from multiple angles.
Various experiments conducted on the zero-shot dataset and the cylinder liner dataset in the industrial field provide competitive results.
arXiv Detail & Related papers (2021-06-15T08:26:36Z) - CutPaste: Self-Supervised Learning for Anomaly Detection and
Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm generalizes to detecting various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z) - Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z) - Detection of Adversarial Supports in Few-shot Classifiers Using Feature
Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and also the first to explore detection for few-shot classifiers to the best of our knowledge.
arXiv Detail & Related papers (2020-12-09T14:13:41Z) - Detection Method Based on Automatic Visual Shape Clustering for
Pin-Missing Defect in Transmission Lines [1.602803566465659]
Bolts are the most numerous fasteners in transmission lines and are prone to losing their split pins.
Automatically detecting pin-missing defects on bolts in transmission lines, so that faults can be found and fixed in a timely and efficient manner, is a difficult problem.
In this paper, an automatic detection model called Automatic Visual Shape Clustering Network (AVSCNet) for pin-missing defect is constructed.
arXiv Detail & Related papers (2020-01-17T10:57:37Z)
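Several of the related papers above (e.g. CutPaste) follow the same recipe: learn feature representations from normal data only, then fit a generative one-class classifier on those features to score anomalies. A minimal sketch of that second stage, assuming a Gaussian density in feature space; the feature dimensions, names, and the Gaussian choice are illustrative, not any paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_gaussian(features, eps=1e-6):
    """Fit mean and (regularized) inverse covariance to normal-sample features."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, cov_inv):
    """Squared Mahalanobis distance to the normal cluster; large = anomalous."""
    d = x - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Toy features: the classifier only ever sees normal samples at fit time.
normal = rng.normal(size=(500, 8))
mu, cov_inv = fit_gaussian(normal)

test = np.vstack([
    rng.normal(size=(5, 8)),             # in-distribution samples
    rng.normal(loc=6.0, size=(5, 8)),    # shifted samples, should score high
])
scores = anomaly_score(test, mu, cov_inv)
```

Thresholding `scores` yields the anomaly decision; richer density models (kernel density estimates, normalizing flows) slot into the same interface.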
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.