Knowledge Distillation for Feature Extraction in Underwater VSLAM
- URL: http://arxiv.org/abs/2303.17981v1
- Date: Fri, 31 Mar 2023 11:33:21 GMT
- Title: Knowledge Distillation for Feature Extraction in Underwater VSLAM
- Authors: Jinghe Yang, Mingming Gong, Girish Nair, Jung Hoon Lee, Jason Monty,
Ye Pu
- Abstract summary: This paper proposes a cross-modal knowledge distillation framework for training an underwater feature detection and matching network (UFEN).
In particular, we use in-air RGBD data to generate synthetic underwater images based on a physical underwater imaging formation model.
To test the effectiveness of our method, we built a new underwater dataset with ground-truth measurements named EASI.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, learning-based feature detection and matching have
outperformed manually-designed methods in in-air cases. However, it is
challenging to learn the features in the underwater scenario due to the absence
of annotated underwater datasets. This paper proposes a cross-modal knowledge
distillation framework for training an underwater feature detection and
matching network (UFEN). In particular, we use in-air RGBD data to generate
synthetic underwater images based on a physical underwater imaging formation
model and employ these as the medium to distil knowledge from a teacher model
SuperPoint pretrained on in-air images. We embed UFEN into the ORB-SLAM3
framework to replace the ORB features by introducing an additional binarization
layer. To test the effectiveness of our method, we built a new underwater
dataset with ground-truth measurements named EASI
(https://github.com/Jinghe-mel/UFEN-SLAM), recorded in an indoor water tank at
different turbidity levels. The experimental results on the existing dataset
and our new dataset demonstrate the effectiveness of our method.
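The synthetic-data step described above relies on a physical underwater image formation model applied to in-air RGB-D pairs, and the binarization layer lets ORB-SLAM3's Hamming-distance matcher consume the learned descriptors. A minimal sketch of both ideas, assuming the common single-scattering attenuation model with per-channel attenuation coefficients and veiling light (the coefficient values and the sign-based binarization below are illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

def synth_underwater(rgb: np.ndarray, depth: np.ndarray,
                     beta=(0.6, 0.25, 0.1),
                     veil=(0.05, 0.35, 0.45)) -> np.ndarray:
    """Render an underwater-looking image from an in-air RGB-D pair.

    Uses the standard attenuation model
        I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d)),
    where J is the in-air radiance, d the scene depth (metres),
    beta_c a per-channel attenuation coefficient and B_c the veiling
    light. Red attenuates fastest, giving the familiar blue-green cast.
    The beta/veil values here are illustrative, not from the paper.
    """
    j = rgb.astype(np.float32) / 255.0                 # J in [0, 1], (H, W, 3)
    t = np.exp(-np.asarray(beta) * depth[..., None])   # per-channel transmission
    out = j * t + np.asarray(veil) * (1.0 - t)         # direct signal + backscatter
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

def binarize_descriptors(desc: np.ndarray) -> np.ndarray:
    """Binarize float descriptors by sign and pack them into bytes so a
    Hamming-distance matcher (as used for ORB in ORB-SLAM3) can consume
    them. A plausible reading of the extra binarization layer; in the
    paper that layer is trained, not a fixed sign threshold."""
    return np.packbits(desc > 0, axis=-1)
```

With 256-dimensional descriptors this yields 32-byte binary codes, the same size as ORB descriptors, which is what makes the drop-in replacement inside ORB-SLAM3 straightforward.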
Related papers
- FAFA: Frequency-Aware Flow-Aided Self-Supervision for Underwater Object Pose Estimation [65.01601309903971]
We introduce FAFA, a Frequency-Aware Flow-Aided self-supervised framework for 6D pose estimation of unmanned underwater vehicles (UUVs).
Our framework relies solely on the 3D model and RGB images, alleviating the need for real pose annotations or other-modality data such as depth maps.
We evaluate the effectiveness of FAFA on common underwater object pose benchmarks and showcase significant performance improvements compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-09-25T03:54:01Z)
- UMono: Physical Model Informed Hybrid CNN-Transformer Framework for Underwater Monocular Depth Estimation [5.596432047035205]
Underwater monocular depth estimation serves as the foundation for tasks such as 3D reconstruction of underwater scenes.
Existing methods fail to consider the unique characteristics of underwater environments.
In this paper, an end-to-end learning framework for underwater monocular depth estimation called UMono is presented.
arXiv Detail & Related papers (2024-07-25T07:52:11Z)
- Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset [60.14089302022989]
Underwater vision tasks often suffer from low segmentation accuracy due to the complex underwater circumstances.
We construct the first large-scale underwater salient instance segmentation dataset (USIS10K).
We propose an Underwater Salient Instance architecture based on Segment Anything Model (USIS-SAM) specifically for the underwater domain.
arXiv Detail & Related papers (2024-06-10T06:17:33Z)
- An Efficient Detection and Control System for Underwater Docking using Machine Learning and Realistic Simulation: A Comprehensive Approach [5.039813366558306]
This work compares different deep-learning architectures to perform underwater docking detection and classification.
A Generative Adversarial Network (GAN) is used to do image-to-image translation, converting the Gazebo simulation image into an underwater-looking image.
Results show an improvement of 20% in high-turbidity scenarios regardless of underwater currents.
arXiv Detail & Related papers (2023-11-02T18:10:20Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images exhibit evident feature-distribution gaps.
Our method, despite running faster and using fewer parameters, still outperforms transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing underwater images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to address it.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- DeepAqua: Self-Supervised Semantic Segmentation of Wetland Surface Water Extent with SAR Images using Knowledge Distillation [44.99833362998488]
We present DeepAqua, a self-supervised deep learning model that eliminates the need for manual annotations during the training phase.
We exploit cases where optical- and radar-based water masks coincide, enabling the detection of both open and vegetated water surfaces.
Experimental results show that DeepAqua outperforms other unsupervised methods by improving accuracy by 7%, Intersection Over Union by 27%, and F1 score by 14%.
arXiv Detail & Related papers (2023-05-02T18:06:21Z)
- MetaUE: Model-based Meta-learning for Underwater Image Enhancement [25.174894007563374]
This paper proposes a model-based deep learning method for restoring clean images under various underwater scenarios.
The meta-learning strategy is used to obtain a pre-trained model on the synthetic underwater dataset.
The model is then fine-tuned on real underwater datasets to obtain a reliable underwater image enhancement model, called MetaUE.
arXiv Detail & Related papers (2023-03-12T02:38:50Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework.
Our method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Domain Adaptive Adversarial Learning Based on Physics Model Feedback for Underwater Image Enhancement [10.143025577499039]
We propose a robust adversarial learning framework that combines physics-model-based feedback control with a domain adaptation mechanism for enhancing underwater images.
A new method is proposed for simulating an underwater-like training dataset from RGB-D data using an underwater image formation model.
Enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2020-02-20T07:50:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.