Learn to Differ: Sim2Real Small Defection Segmentation Network
- URL: http://arxiv.org/abs/2103.04297v1
- Date: Sun, 7 Mar 2021 08:25:56 GMT
- Title: Learn to Differ: Sim2Real Small Defection Segmentation Network
- Authors: Zexi Chen, Zheyuan Huang, Yunkai Wang, Xuecheng Xu, Yue Wang, Rong Xiong
- Abstract summary: Small defection segmentation approaches are trained in specific settings and tend to be limited by a fixed context.
We propose the network SSDS, which learns to distinguish small defections between two images regardless of the context.
- Score: 8.488353860049898
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent deep-learning-based small defection segmentation approaches
are trained in specific settings and tend to be limited by a fixed context.
Throughout training, the network inevitably learns the representation of the
background of the training data before figuring out the defection. As a result,
such networks underperform at inference once the context changes, and this can
only be fixed by retraining in every new setting. This ultimately limits
practical robotic applications, where contexts keep varying. To cope with this,
instead of training a network context by context and hoping it generalizes, why
not stop misleading it with any limited context and train it on pure simulation
instead? In this paper, we propose the network SSDS, which learns a way of
distinguishing small defections between two images regardless of the context,
so that the network can be trained once and for all. A small defection
detection layer that exploits the pose sensitivity of phase correlation between
images is introduced, followed by an outlier masking layer. The network is
trained on randomly generated simulated data with simple shapes and generalizes
to the real world. Finally, SSDS is validated on real-world collected data,
demonstrating that even when trained on cheap simulation, it can still find
small defections in the real world, showing its effectiveness and potential for
practical applications.
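The abstract's key mechanism, phase correlation, is concrete enough to sketch. Below is a minimal NumPy illustration of plain phase correlation between an image pair; it sketches only the underlying signal-processing idea, not the SSDS detection layer itself (the learned layers, the outlier masking, and the function names here are illustrative assumptions, not taken from the paper).

    import numpy as np

    def phase_correlation(img_a, img_b):
        # Normalized cross-power spectrum; its inverse FFT ideally peaks at
        # the translation between the two images. Content differences (such
        # as a small defection) perturb this response, which is the pose
        # sensitivity the abstract says the detection layer exploits.
        fa = np.fft.fft2(img_a)
        fb = np.fft.fft2(img_b)
        cross_power = fa * np.conj(fb)
        cross_power /= np.abs(cross_power) + 1e-8  # keep phase, drop magnitude
        return np.fft.fftshift(np.fft.ifft2(cross_power).real)

    # Toy usage: a pure translation yields a single sharp correlation peak.
    a = np.random.rand(128, 128)
    b = np.roll(a, shift=(3, 5), axis=(0, 1))
    response = phase_correlation(a, b)
    peak = np.unravel_index(np.argmax(response), response.shape)
    print("correlation peak at", peak)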
Related papers
- Convolutional Networks as Extremely Small Foundation Models: Visual Prompting and Theoretical Perspective [1.79487674052027]
In this paper, we design a prompting module which performs few-shot adaptation of generic deep networks to new tasks.
Driven by learning theory, we derive prompting modules that are as simple as possible, as they generalize better under the same training error.
In practice, SDForest has extremely low computational cost and achieves real-time performance even on a CPU.
arXiv Detail & Related papers (2024-09-03T12:34:23Z)
- A simple theory for training response of deep neural networks [0.0]
Deep neural networks give us a powerful method to model the relationship between input and output in a training dataset.
We show that the training response consists of different factors depending on the training stage, activation function, and training method.
In addition, we show that feature-space reduction is an effect of the training dynamics and can result in network fragility.
arXiv Detail & Related papers (2024-05-07T07:20:15Z)
- RanDumb: A Simple Approach that Questions the Efficacy of Continual Representation Learning [68.42776779425978]
We show that existing online continually trained deep networks produce inferior representations compared to a simple pre-defined random transform.
We then train a simple linear classifier on top, without storing any exemplars, processing one sample at a time in an online continual learning setting (a minimal sketch of this pipeline follows this entry).
Our study reveals significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios.
arXiv Detail & Related papers (2024-02-13T22:07:29Z)
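As a rough illustration of the pipeline this entry describes, here is a minimal sketch assuming random Fourier features as the pre-defined random transform and an SGD-trained linear classifier; the exact transform and classifier RanDumb uses may differ, and the data here is a synthetic stand-in.

    import numpy as np
    from sklearn.kernel_approximation import RBFSampler
    from sklearn.linear_model import SGDClassifier

    rng = np.random.RandomState(0)
    n_features, n_classes = 64, 10

    # Pre-defined random transform: fixed at initialization, never learned.
    transform = RBFSampler(n_components=2048, random_state=0)
    transform.fit(np.zeros((1, n_features)))  # only sets the random weights

    clf = SGDClassifier(loss="log_loss")
    classes = np.arange(n_classes)

    # Online continual learning: one sample at a time, no stored exemplars.
    for _ in range(1000):
        x = rng.randn(1, n_features)        # stand-in for one input sample
        y = rng.randint(n_classes, size=1)  # stand-in label
        clf.partial_fit(transform.transform(x), y, classes=classes)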
- Relearning Forgotten Knowledge: on Forgetting, Overfit and Training-Free Ensembles of DNNs [9.010643838773477]
We introduce a novel score for quantifying overfit, which monitors the forgetting rate of deep models on validation data (a toy reading of such a score is sketched after this entry).
We show that overfit can occur with and without a decrease in validation accuracy, and may be more common than previously appreciated.
We use our observations to construct a new ensemble method, based solely on the training history of a single network, which provides significant improvement without any additional cost in training time.
arXiv Detail & Related papers (2023-10-17T09:22:22Z)
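The "forgetting rate on validation data" idea above is easy to make concrete. Below is one plausible toy reading of such a score (the paper's exact definition may differ): count validation samples that the model classified correctly at some earlier checkpoint but gets wrong at the latest one.

    import numpy as np

    def forgetting_rate(correct_history):
        # correct_history: boolean array of shape (checkpoints, n_val);
        # entry [t, i] marks whether validation sample i was classified
        # correctly at checkpoint t.
        ever_correct = correct_history[:-1].any(axis=0)  # right earlier
        wrong_now = ~correct_history[-1]                 # wrong at the end
        return float((ever_correct & wrong_now).mean())

    # Toy usage: 5 checkpoints, 8 validation samples.
    history = np.random.rand(5, 8) > 0.3
    print("forgetting rate:", forgetting_rate(history))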
- Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
The proposed NeurMAP can be applied to existing deblurring neural networks, and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z)
- Targeted Gradient Descent: A Novel Method for Convolutional Neural Networks Fine-tuning and Online-learning [9.011106198253053]
A convolutional neural network (ConvNet) is usually trained and then tested using images drawn from the same distribution.
Generalizing a ConvNet to various tasks often requires a complete training dataset consisting of images drawn from the different tasks.
We present Targeted Gradient Descent (TGD), a novel fine-tuning method that can extend a pre-trained network to a new task without revisiting data from the previous task.
arXiv Detail & Related papers (2021-09-29T21:22:09Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Multi-Agent Semi-Siamese Training for Long-tail and Shallow Face Learning [54.13876727413492]
In many real-world face recognition scenarios, the depth of the training dataset is shallow, meaning only two face images are available per ID.
With a non-uniform increase of samples, this issue becomes a more general case, a.k.a. long-tail face learning.
Based on Semi-Siamese Training (SST), we introduce an advanced solution named Multi-Agent Semi-Siamese Training (MASST).
MASST includes a probe network and multiple gallery agents; the former aims to encode the probe features, and the latter constitutes a stack of networks encoding the gallery features.
arXiv Detail & Related papers (2021-05-10T04:57:32Z)
- Fully Convolutional Networks for Continuous Sign Language Recognition [83.85895472824221]
Continuous sign language recognition is a challenging task that requires learning on both spatial and temporal dimensions.
We propose a fully convolutional network (FCN) for online sign language recognition (SLR) that concurrently learns spatial and temporal features from weakly annotated video sequences.
arXiv Detail & Related papers (2020-07-24T08:16:37Z)
- Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to the challenges of obtaining fully labeled real-world image deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework that enables the network to learn to derain using a synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.