A Reinforcement Learning Approach for Sequential Spatial Transformer
Networks
- URL: http://arxiv.org/abs/2106.14295v1
- Date: Sun, 27 Jun 2021 17:41:17 GMT
- Title: A Reinforcement Learning Approach for Sequential Spatial Transformer
Networks
- Authors: Fatemeh Azimi, Federico Raue, Joern Hees, Andreas Dengel
- Abstract summary: We formulate the task as a Markov Decision Process (MDP) and use RL to solve this sequential decision-making problem.
In our method, we are not bound to the differentiability of the sampling modules.
We design multiple experiments to verify the effectiveness of our method using cluttered MNIST and Fashion-MNIST datasets.
- Score: 6.585049648605185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spatial Transformer Networks (STN) can generate geometric transformations
which modify input images to improve the classifier's performance. In this
work, we combine the idea of STN with Reinforcement Learning (RL). To this end,
we break the affine transformation down into a sequence of simple and discrete
transformations. We formulate the task as a Markov Decision Process (MDP)
and use RL to solve this sequential decision-making problem. STN architectures
learn the transformation parameters by minimizing the classification error and
backpropagating the gradients through a sub-differentiable sampling module. In
our method, we are not bound to the differentiability of the sampling modules.
Moreover, we have freedom in designing the objective rather than only
minimizing the error; e.g., we can directly set the target as maximizing the
accuracy. We design multiple experiments to verify the effectiveness of our
method using cluttered MNIST and Fashion-MNIST datasets and show that our
method outperforms STN with a proper definition of MDP components.
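As a rough illustration of the idea above, the sketch below treats each simple transformation (small shifts, rotations, scalings, plus a stop action) as one discrete MDP action, accumulates the chosen affine matrices over an episode, and rewards the agent with the classifier's accuracy on the warped image; such an argmax-based reward is non-differentiable but poses no problem for RL. The action set, step sizes, reward, and the warp_affine helper are illustrative assumptions, not the paper's exact design; any classifier returning class probabilities and any standard RL algorithm (e.g., DQN or policy gradient) can be plugged in.

    import numpy as np
    from scipy.ndimage import affine_transform

    def make_action_set(dt=2.0, dtheta=np.deg2rad(5.0), ds=0.05):
        # Each discrete action is a simple 3x3 affine matrix (homogeneous coordinates).
        c, s = np.cos(dtheta), np.sin(dtheta)
        return {
            "shift_left":  np.array([[1.0, 0.0, -dt], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]),
            "shift_right": np.array([[1.0, 0.0,  dt], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]),
            "rotate_ccw":  np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]),
            "rotate_cw":   np.array([[c,  s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]),
            "scale_down":  np.diag([1.0 - ds, 1.0 - ds, 1.0]),
            "scale_up":    np.diag([1.0 + ds, 1.0 + ds, 1.0]),
            "stop":        np.eye(3),
        }

    def warp_affine(image, T):
        # Resample the image under the accumulated transform (illustrative helper).
        Tinv = np.linalg.inv(T)
        return affine_transform(image, Tinv[:2, :2], offset=Tinv[:2, 2], order=1)

    class SequentialTransformEnv:
        # One episode sequentially refines the transform applied to a single image.
        def __init__(self, classifier, actions, max_steps=10):
            self.classifier = classifier  # any model returning class probabilities
            self.actions = actions
            self.max_steps = max_steps

        def reset(self, image, label):
            self.image, self.label = image, label
            self.transform = np.eye(3)    # accumulated affine transform
            self.steps = 0
            return warp_affine(self.image, self.transform)

        def step(self, action_name):
            # Compose the chosen simple transformation with the accumulated one.
            self.transform = self.actions[action_name] @ self.transform
            self.steps += 1
            obs = warp_affine(self.image, self.transform)
            # The reward can target accuracy directly; no differentiability is needed.
            reward = float(np.argmax(self.classifier(obs)) == self.label)
            done = action_name == "stop" or self.steps >= self.max_steps
            return obs, reward, done
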
Related papers
- Accelerating Toeplitz Neural Network with Constant-time Inference
Complexity [21.88774274472737]
Toeplitz Neural Networks (TNNs) have exhibited outstanding performance in various sequence modeling tasks.
They outperform commonly used Transformer-based models while benefiting from log-linear space-time complexities.
In this paper, we aim to combine the strengths of TNNs and State Space Models (SSMs) by converting TNNs to SSMs during inference.
arXiv Detail & Related papers (2023-11-15T07:50:57Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z) - A Simple Strategy to Provable Invariance via Orbit Mapping [14.127786615513978]
We propose a method to make network architectures provably invariant with respect to group actions.
In a nutshell, we intend to 'undo' any possible transformation before feeding the data into the actual network.
arXiv Detail & Related papers (2022-09-24T03:40:42Z) - Improving the Sample-Complexity of Deep Classification Networks with
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Use of Deterministic Transforms to Design Weight Matrices of a Neural
Network [14.363218103948782]
The self size-estimating feedforward network (SSFN) is a multilayer feedforward network.
In this article, the use of deterministic transforms instead of random matrix instances is explored.
The effectiveness of the proposed approach vis-a-vis the SSFN is illustrated for object classification tasks using several benchmark datasets.
arXiv Detail & Related papers (2021-10-06T10:21:24Z) - The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression [26.5147705530439]
- The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression [26.5147705530439]
We define a transfer learning approach to the target task as a linear regression optimization with a regularization on the distance between the to-be-learned target parameters and the already-learned source parameters.
We show that for sufficiently related tasks, the optimally tuned transfer learning approach can outperform the optimally tuned ridge regression method.
arXiv Detail & Related papers (2021-03-09T18:46:01Z) - Exploring Complementary Strengths of Invariant and Equivariant
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z) - FDA: Fourier Domain Adaptation for Semantic Segmentation [82.4963423086097]
We describe a simple method for unsupervised domain adaptation, whereby the discrepancy between the source and target distributions is reduced by swapping the low-frequency spectrum of one with the other.
We illustrate the method in semantic segmentation, where densely annotated images are plentiful in one domain but difficult to obtain in another.
Our results indicate that even simple procedures can discount nuisance variability in the data that more sophisticated methods struggle to learn away.
arXiv Detail & Related papers (2020-04-11T22:20:48Z) - Plannable Approximations to MDP Homomorphisms: Equivariance under
- Plannable Approximations to MDP Homomorphisms: Equivariance under Actions [72.30921397899684]
We introduce a contrastive loss function that enforces action equivariance on the learned representations.
We prove that when our loss is zero, we have a homomorphism of a deterministic Markov Decision Process.
We show experimentally that for deterministic MDPs, the optimal policy in the abstract MDP can be successfully lifted to the original MDP.
arXiv Detail & Related papers (2020-02-27T08:29:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.