Self-supervised Learning for Sonar Image Classification
- URL: http://arxiv.org/abs/2204.09323v1
- Date: Wed, 20 Apr 2022 08:58:35 GMT
- Title: Self-supervised Learning for Sonar Image Classification
- Authors: Alan Preciado-Grijalva and Bilal Wehbe and Miguel Bande Firvida and
Matias Valdenegro-Toro
- Abstract summary: Self-supervised learning has proved to be a powerful approach for learning image representations without the need for large labeled datasets.
We present pre-training and transfer learning results on real-life sonar image datasets.
- Score: 6.1947705963945845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning has proved to be a powerful approach for
learning image representations without the need for large labeled datasets. For underwater
robotics, it is of great interest to design computer vision algorithms to
improve perception capabilities such as sonar image classification. Due to the
confidential nature of sonar imaging and the difficulty of interpreting sonar
images, it is challenging to create large, public, labeled sonar datasets to
train supervised learning algorithms. In this work, we investigate the
potential of three self-supervised learning methods (RotNet, Denoising
Autoencoders, and Jigsaw) to learn high-quality sonar image representations
without the need for human labels. We present pre-training and transfer learning
results on real-life sonar image datasets. Our results indicate that
self-supervised pre-training yields classification performance comparable to
supervised pre-training in a few-shot transfer learning setup across all three
methods. Code and self-supervised pre-trained models are available at
https://github.com/agrija9/ssl-sonar-images
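Below is a minimal, hedged sketch of one of the three pretext tasks named above (RotNet-style rotation prediction), assuming a PyTorch setup; the backbone, image size, and hyperparameters are illustrative and not taken from the authors' repository.
```python
import torch
import torch.nn as nn
import torchvision.models as models

# Minimal RotNet-style pretext task sketch (illustrative, not the authors' code).
# Each image is rotated by 0/90/180/270 degrees and the network predicts the
# rotation, forcing the encoder to learn useful features without human labels.

class RotNet(nn.Module):
    def __init__(self, num_rotations: int = 4):
        super().__init__()
        backbone = models.resnet18(weights=None)            # backbone choice is an assumption
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Linear(512, num_rotations)           # 4-way rotation classifier

    def forward(self, x):
        feats = self.encoder(x).flatten(1)
        return self.head(feats)

def rotate_batch(images: torch.Tensor):
    """Create rotated copies of a batch and the corresponding rotation labels."""
    rotated, labels = [], []
    for k in range(4):                                       # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

model = RotNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one illustrative training step on unlabeled sonar patches (random tensors here)
sonar_batch = torch.randn(8, 3, 96, 96)
inputs, targets = rotate_batch(sonar_batch)
optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
```
After pre-training on such a pretext task, the encoder weights would be reused and fine-tuned with a small number of labeled sonar images, matching the few-shot transfer setup described in the abstract.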
Related papers
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images
with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, there has been little exploration of unsupervised learning on diffusion-generated images.
We introduce customized solutions that fully exploit the attention masks that come for free with the diffusion generation process.
arXiv Detail & Related papers (2023-08-13T10:07:46Z) - Motion Degeneracy in Self-supervised Learning of Elevation Angle
Estimation for 2D Forward-Looking Sonar [4.683630397028384]
This study aims to realize stable self-supervised learning of elevation angle estimation without pretraining on synthetic images.
We first analyze the motion field of 2D forward-looking sonar, which is related to the main supervision signal.
arXiv Detail & Related papers (2023-07-30T08:06:11Z) - Mixture of Self-Supervised Learning [2.191505742658975]
Self-supervised learning works by training a model on a pretext task before applying it to a specific downstream task.
Previous studies have only used one type of transformation as a pretext task.
This raises the question of what happens when more than one pretext task is used, and whether a gating network can be used to combine all pretext tasks.
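As a rough illustration of the question raised above, here is a hedged sketch of how several pretext-task heads could be combined through a gating network; the class name, dimensions, and task heads are assumptions, not the paper's implementation.
```python
import torch
import torch.nn as nn

# Hedged sketch of combining several pretext-task heads with a gating network
# (an illustrative reading of the abstract, not the authors' architecture).

class GatedPretextModel(nn.Module):
    def __init__(self, feat_dim: int = 256, num_tasks: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        # one projection per pretext task (e.g. rotation, jigsaw, colorization)
        self.task_heads = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in range(num_tasks)])
        # gating network outputs a softmax-normalized weight per task
        self.gate = nn.Sequential(nn.Linear(feat_dim, num_tasks), nn.Softmax(dim=-1))

    def forward(self, x):
        z = self.encoder(x)
        task_feats = torch.stack([head(z) for head in self.task_heads], dim=1)  # (B, T, D)
        weights = self.gate(z).unsqueeze(-1)                                    # (B, T, 1)
        return (weights * task_feats).sum(dim=1)                                # gated mixture

combined = GatedPretextModel()(torch.randn(4, 3, 64, 64))
print(combined.shape)  # torch.Size([4, 256])
```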
arXiv Detail & Related papers (2023-07-27T14:38:32Z) - Learning Transferable Pedestrian Representation from Multimodal
Information Supervision [174.5150760804929]
VAL-PAT is a novel framework that learns transferable representations to enhance various pedestrian analysis tasks with multimodal information.
We first perform pre-training on the LUPerson-TA dataset, where each image contains text and attribute annotations.
We then transfer the learned representations to various downstream tasks, including person reID, person attribute recognition and text-based person search.
arXiv Detail & Related papers (2023-04-12T01:20:58Z) - Unified Contrastive Learning in Image-Text-Label Space [130.31947133453406]
Unified Contrastive Learning (UniCL) is an effective way of learning semantically rich yet discriminative representations.
On its own, UniCL is a good learner on pure image-label data, rivaling supervised learning methods across three image classification datasets.
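A hedged sketch of what a unified image-text-label contrastive objective can look like: pairs that share a label are treated as positives in both the image-to-text and text-to-image directions. The shapes, temperature, and soft-target construction are assumptions rather than the paper's exact formulation.
```python
import torch
import torch.nn.functional as F

# Hedged sketch of a UniCL-style objective: a contrastive loss over an
# image-text batch where any pair sharing the same label counts as a positive.

def unified_contrastive_loss(img_emb, txt_emb, labels, temperature: float = 0.07):
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature               # (B, B) similarity matrix

    # positives: entries whose image and text carry the same label
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    targets = positives / positives.sum(dim=1, keepdim=True)   # row-normalized soft targets

    loss_i2t = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss_t2i = -(targets.t() * F.log_softmax(logits.t(), dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_i2t + loss_t2i)

# toy usage: 4 samples, 2 classes; real image/text embeddings would come from two encoders
loss = unified_contrastive_loss(torch.randn(4, 128), torch.randn(4, 128),
                                torch.tensor([0, 0, 1, 1]))
print(loss.item())
```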
arXiv Detail & Related papers (2022-04-07T17:34:51Z) - Unsupervised Object-Level Representation Learning from Scene Images [97.07686358706397]
Object-level Representation Learning (ORL) is a new self-supervised learning framework for scene images.
Our key insight is to leverage image-level self-supervised pre-training as the prior to discover object-level semantic correspondence.
ORL significantly improves the performance of self-supervised learning on scene images, even surpassing supervised ImageNet pre-training on several downstream tasks.
arXiv Detail & Related papers (2021-06-22T17:51:24Z) - AugNet: End-to-End Unsupervised Visual Representation Learning with
Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method is able to represent images in a low-dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
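As a rough illustration, the following is a hedged sketch of augmentation-based embedding learning in the spirit described above: two augmented views of the same image are pulled together in embedding space while other images are pushed away. The loss, encoder, and augmentations are placeholders, not AugNet's actual design.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch of augmentation-based embedding learning (assumed reading of the
# abstract): views of the same image are attracted, other images are repelled.

def embedding_similarity_loss(view_a, view_b, margin: float = 0.5):
    """view_a, view_b: (B, D) embeddings of two augmentations of the same B images."""
    view_a, view_b = F.normalize(view_a, dim=-1), F.normalize(view_b, dim=-1)
    sim = view_a @ view_b.t()                       # (B, B) cosine similarities
    pos = sim.diag()                                # same image, different augmentation
    neg = sim - torch.eye(sim.size(0)) * 2.0        # mask out the diagonal
    hardest_neg = neg.max(dim=1).values             # most similar *other* image
    return F.relu(margin + hardest_neg - pos).mean()

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
images = torch.randn(16, 3, 32, 32)
# stand-ins for two random augmentations (e.g. noise / horizontal flip) of the same batch
loss = embedding_similarity_loss(encoder(images + 0.1 * torch.randn_like(images)),
                                 encoder(images.flip(-1)))
print(loss.item())
```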
arXiv Detail & Related papers (2021-06-11T09:02:30Z) - Curious Representation Learning for Embodied Intelligence [81.21764276106924]
Self-supervised representation learning has achieved remarkable success in recent years.
Yet to build truly intelligent agents, we must construct representation learning algorithms that can learn from environments.
We propose a framework, curious representation learning, which jointly learns a reinforcement learning policy and a visual representation model.
arXiv Detail & Related papers (2021-05-03T17:59:20Z) - Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks.
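A minimal sketch of the linear-evaluation protocol mentioned above: freeze a pretrained encoder, extract features, and fit a linear classifier on top. The encoder weights and data here are placeholders, not the paper's SimCLR-pretrained histopathology setup.
```python
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Minimal linear-probe sketch: frozen pretrained encoder + linear classifier.
encoder = models.resnet18(weights="IMAGENET1K_V1")   # swap in a SimCLR-pretrained model
encoder.fc = torch.nn.Identity()                     # drop the classification head
encoder.eval()

# stand-in batches; in practice these would be histopathology patches and labels
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,)).numpy()

with torch.no_grad():
    features = encoder(images).numpy()               # (64, 512) frozen features

probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("linear probe accuracy:", probe.score(features, labels))
```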
arXiv Detail & Related papers (2020-11-27T19:18:45Z) - Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)