Domain Adaptor Networks for Hyperspectral Image Recognition
- URL: http://arxiv.org/abs/2108.01555v1
- Date: Tue, 3 Aug 2021 15:06:39 GMT
- Title: Domain Adaptor Networks for Hyperspectral Image Recognition
- Authors: Gustavo Perez and Subhransu Maji
- Abstract summary: We consider the problem of adapting a network trained on three-channel color images to a hyperspectral domain with a large number of channels.
We propose domain adaptor networks that map the input to be compatible with a network trained on large-scale color image datasets such as ImageNet.
- Score: 35.95313368586933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of adapting a network trained on three-channel color
images to a hyperspectral domain with a large number of channels. To this end,
we propose domain adaptor networks that map the input to be compatible with a
network trained on large-scale color image datasets such as ImageNet. Adaptors
enable learning on small hyperspectral datasets where training a network from
scratch may not be effective. We investigate architectures and strategies for
training adaptors and evaluate them on a benchmark consisting of multiple
hyperspectral datasets. We find that simple schemes such as linear projection
or subset selection are often the most effective, but can lead to a loss in
performance in some cases. We also propose a novel multi-view adaptor where
views of the inputs are combined in an intermediate layer of the network in an
order-invariant manner, which provides further improvements. We present extensive
experiments by varying the number of training examples in the benchmark to
characterize the accuracy and computational trade-offs offered by these
adaptors.
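As an illustration of the simple schemes named in the abstract, the sketch below shows a per-pixel linear projection and a band-subset selection that map a C-channel hyperspectral cube to a three-channel input for an RGB-pretrained backbone. The band count (31) and the selected band indices are illustrative assumptions; in practice the projection matrix would be learned, not random.

```python
import numpy as np

def linear_projection_adaptor(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Per-pixel linear projection from C hyperspectral channels to 3,
    so an RGB-pretrained backbone can consume the result.
    x: (C, H, W) cube; W: (3, C) projection matrix (learned in practice)."""
    return np.einsum('oc,chw->ohw', W, x)

def subset_selection_adaptor(x: np.ndarray, bands=(0, 1, 2)) -> np.ndarray:
    """Pick three of the C input bands; the band indices are illustrative."""
    return x[list(bands)]

rng = np.random.default_rng(0)
cube = rng.standard_normal((31, 8, 8))   # a 31-band hyperspectral patch
W = rng.standard_normal((3, 31))         # stand-in for a learned projection
rgb_like = linear_projection_adaptor(cube, W)
print(rgb_like.shape)  # (3, 8, 8)
```

Either output can then be fed to a network pretrained on three-channel color images such as ImageNet models.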
Related papers
- Cross-domain and Cross-dimension Learning for Image-to-Graph Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z)
- Rapid Network Adaptation: Learning to Adapt Neural Networks Using Test-Time Feedback [12.946419909506883]
We create a closed-loop system that makes use of a test-time feedback signal to adapt a network on the fly.
We show that this loop can be effectively implemented using a learning-based function, which realizes an amortized optimizer for the network.
This leads to an adaptation method, named Rapid Network Adaptation (RNA), that is notably more flexible and orders of magnitude faster than the baselines.
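The closed-loop idea described above can be sketched as a tiny parameter-update loop driven by a test-time feedback signal. The gradient-like `feedback` callable, step count, and learning rate below are hypothetical stand-ins for illustration, not the paper's actual mechanism.

```python
def rapid_adapt(params, feedback, steps=3, lr=0.1):
    """Closed-loop test-time adaptation sketch: repeatedly nudge the
    parameters using a feedback signal computed at test time.
    `feedback` is a hypothetical callable returning a gradient-like signal."""
    for _ in range(steps):
        params = params - lr * feedback(params)
    return params

# Toy usage: for a quadratic objective p**2, the feedback is its gradient 2*p,
# so each step shrinks the parameter by a factor of 0.8.
adapted = rapid_adapt(1.0, lambda p: 2.0 * p)
print(adapted)  # approximately 0.512 after three steps
```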
arXiv Detail & Related papers (2023-09-27T16:20:39Z)
- Multi-Domain Learning with Modulation Adapters [33.54630534228469]
Multi-domain learning aims to handle related tasks, such as image classification across multiple domains, simultaneously.
Modulation Adapters update the convolutional weights of the model in a multiplicative manner for each task.
Our approach yields excellent results, with accuracies that are comparable to or better than those of existing state-of-the-art approaches.
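A minimal sketch of multiplicative weight modulation in the spirit of this entry: a shared convolutional kernel bank is rescaled per task. The per-output-channel factorization below is an assumption for illustration, not the paper's exact parameterization.

```python
import numpy as np

def modulate_conv_weights(shared_w: np.ndarray, task_scale: np.ndarray) -> np.ndarray:
    """Scale shared conv weights multiplicatively for one task.
    shared_w: (out_c, in_c, k, k) shared kernels;
    task_scale: (out_c,) task-specific factors (learned per task in practice)."""
    return shared_w * task_scale[:, None, None, None]

shared = np.ones((4, 2, 3, 3))  # one kernel bank shared across all tasks
task_a = modulate_conv_weights(shared, np.array([2.0, 1.0, 0.5, 3.0]))
print(task_a.shape)  # (4, 2, 3, 3)
```

Because only the small per-task scale vectors differ, the shared weights are stored once across all domains.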
arXiv Detail & Related papers (2023-07-17T14:40:16Z)
- Multi-Representation Adaptation Network for Cross-domain Image Classification [20.615155915233693]
In image classification, it is often expensive and time-consuming to acquire sufficient labels.
Existing approaches mainly align the distributions of representations extracted by a single structure.
We propose Multi-Representation Adaptation which can dramatically improve the classification accuracy for cross-domain image classification.
arXiv Detail & Related papers (2022-01-04T06:34:48Z)
- Leveraging Image Complexity in Macro-Level Neural Network Design for Medical Image Segmentation [3.974175960216864]
We show that image complexity can be used as a guideline in choosing what is best for a given dataset.
For high-complexity datasets, a shallow network running on the original images may yield better segmentation results than a deep network running on downsampled images.
arXiv Detail & Related papers (2021-12-21T09:49:47Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- A3D: Adaptive 3D Networks for Video Action Recognition [17.118351068420086]
A3D is an adaptive 3D network that can perform inference under a wide range of computational constraints with one-time training.
It finds good configurations by trading off between network width and temporal resolution.
Even under the same computational constraints, performance of our adaptive networks can be significantly boosted.
arXiv Detail & Related papers (2020-11-24T21:01:11Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Shape Adaptor: A Learnable Resizing Module [59.940372879848624]
We present a novel resizing module for neural networks: shape adaptor, a drop-in enhancement built on top of traditional resizing layers.
Our implementation enables shape adaptors to be trained end-to-end without any additional supervision.
We show the effectiveness of shape adaptors on two other applications: network compression and transfer learning.
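A loose sketch of a learnable resizing gate in the spirit of this entry: two resizing branches are blended by a sigmoid-squashed scalar that could be trained end-to-end. The nearest-neighbour resize and the specific blending rule are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def nn_resize(x: np.ndarray, out_hw: tuple) -> np.ndarray:
    """Nearest-neighbour resize of an (H, W) array via index sampling."""
    h, w = x.shape
    oh, ow = out_hw
    rows = np.arange(oh) * h // oh
    cols = np.arange(ow) * w // ow
    return x[rows][:, cols]

def shape_adaptor(x: np.ndarray, alpha: float, small_hw: tuple, large_hw: tuple) -> np.ndarray:
    """Blend a downsample-then-upsample branch with a direct-resize branch,
    gated by a learnable scalar alpha; both branches share the output size."""
    a = 1.0 / (1.0 + np.exp(-alpha))  # sigmoid keeps the gate in (0, 1)
    branch_small = nn_resize(nn_resize(x, small_hw), large_hw)
    branch_large = nn_resize(x, large_hw)
    return (1 - a) * branch_small + a * branch_large

x = np.arange(16, dtype=float).reshape(4, 4)
out = shape_adaptor(x, alpha=0.0, small_hw=(2, 2), large_hw=(4, 4))
print(out.shape)  # (4, 4)
```

With alpha = 0 the gate is 0.5, so the output is an even mix of the coarse and fine branches; training would push alpha toward whichever spatial resolution suits the task.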
arXiv Detail & Related papers (2020-08-03T14:15:52Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not have a predictable recognition behavior with respect to the input resolution change.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and a resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.