[Re] Differentiable Spatial Planning using Transformers
- URL: http://arxiv.org/abs/2208.09536v1
- Date: Fri, 19 Aug 2022 20:14:29 GMT
- Title: [Re] Differentiable Spatial Planning using Transformers
- Authors: Rohit Ranjan, Himadri Bhakta, Animesh Jha, Parv Maheshwari, Debashish Chakravarty
- Abstract summary: The problem of spatial path planning in a differentiable way is considered. The authors show that their proposed Spatial Planning Transformers outperform prior data-driven models. We verify these claims by reproducing their experiments and testing their method on new data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This report covers our reproduction effort of the paper 'Differentiable
Spatial Planning using Transformers' by Chaplot et al. The paper considers the
problem of spatial path planning in a differentiable way. The authors show that
their proposed Spatial Planning Transformers (SPT) outperform prior data-driven
models while leveraging differentiable structures to simultaneously learn
mapping without a ground-truth map. We verify these claims by reproducing their
experiments and testing their method on new data. We also investigate the
stability of planning accuracy on maps with increased obstacle complexity.
Efforts to investigate and verify the learnings of the Mapper module failed
due to a lack of computational resources and the authors being unreachable.
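Since the reproduction centers on the SPT architecture, a minimal sketch of what such a planner can look like may help. This is our own illustrative PyTorch code, not the authors' implementation: the class name SpatialPlannerSketch, the two-channel token layout, and all sizes are assumptions made for clarity.

```python
# Minimal sketch of an SPT-style planner (not the authors' code): a
# transformer encoder reads a flattened obstacle map plus a goal channel
# and predicts a per-cell value (e.g. a distance-to-goal field) that a
# greedy policy could follow. All names and sizes are illustrative.
import torch
import torch.nn as nn

class SpatialPlannerSketch(nn.Module):
    def __init__(self, map_size=15, d_model=64, nhead=4, num_layers=3):
        super().__init__()
        self.map_size = map_size
        # Each cell becomes one token: (obstacle, goal) -> d_model features.
        self.cell_embed = nn.Linear(2, d_model)
        # Learned positional embedding so attention can reason over
        # long-range spatial dependencies across the whole grid.
        self.pos_embed = nn.Parameter(torch.zeros(map_size * map_size, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # Per-cell scalar output head.
        self.head = nn.Linear(d_model, 1)

    def forward(self, obstacle_map, goal_map):
        # obstacle_map, goal_map: (batch, map_size, map_size) in {0, 1}
        b = obstacle_map.shape[0]
        cells = torch.stack([obstacle_map, goal_map], dim=-1)   # (b, H, W, 2)
        tokens = self.cell_embed(cells.view(b, -1, 2))          # (b, H*W, d)
        h = self.encoder(tokens + self.pos_embed)               # (b, H*W, d)
        return self.head(h).view(b, self.map_size, self.map_size)

# Usage: predict a field for one random 15x15 map.
model = SpatialPlannerSketch()
obstacles = (torch.rand(1, 15, 15) < 0.3).float()
goal = torch.zeros(1, 15, 15); goal[0, 7, 7] = 1.0
field = model(obstacles, goal)  # (1, 15, 15)
```

The design point the sketch illustrates is that self-attention lets every cell attend to every other cell in a single layer, which is how SPTs capture long-range spatial dependencies that convolution-based differentiable planners propagate only gradually over many iterations.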
Related papers
- Learning to Plan and Generate Text with Citations (2024-04-04)
  We explore the attribution capabilities of plan-based models, which have recently been shown to improve the faithfulness, grounding, and controllability of generated text.
  We propose two attribution models that utilize different variants of blueprints: an abstractive model, where questions are generated from scratch, and an extractive model, where questions are copied from the input.
- Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference (2024-02-07)
  We study generative modeling for planning with datasets repurposed from offline reinforcement learning.
  We introduce the Latent Plan Transformer (LPT), a novel model that leverages a latent variable to connect a Transformer-based trajectory generator and the final return.
- MixTEA: Semi-supervised Entity Alignment with Mixture Teaching (2023-11-08)
  Semi-supervised entity alignment (EA) is a practical and challenging task because of the lack of adequate labeled mappings as training data.
  We propose a novel MixTEA method, which guides model learning with an end-to-end mixture teaching of manually labeled mappings and probabilistic pseudo mappings.
- Transformer-Based Neural Surrogate for Link-Level Path Loss Prediction from Variable-Sized Maps (2023-10-06)
  Estimating path loss for a transmitter-receiver location is key to many use cases, including network planning and handover.
  We present a transformer-based neural network architecture that enables predicting link-level properties from maps of various dimensions and from sparse measurements.
- Robust Self-Supervised LiDAR Odometry via Representative Structure Discovery and 3D Inherent Error Modeling (2022-02-27)
  In this paper, we aim to alleviate the influence of unreliable structures in the training, inference and mapping phases.
  We develop a two-stage odometry estimation network, where we obtain the ego-motion by estimating a set of sub-region transformations.
  Our two-frame odometry outperforms the previous state of the art by 16%/12% in terms of translational/rotational errors.
- Smoothed Embeddings for Certified Few-Shot Learning (2022-02-02)
  We extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.
  Our results are confirmed by experiments on different datasets.
- Differentiable Spatial Planning using Transformers (2021-12-02)
  We propose Spatial Planning Transformers (SPT), which, given an obstacle map, learn to generate actions by planning over long-range spatial dependencies.
  In the setting where the ground-truth map is not known to the agent, we leverage pre-trained SPTs in an end-to-end framework.
  SPTs outperform prior state-of-the-art differentiable planners across all setups for both manipulation and navigation tasks. (A sketch of the training targets such a planner can be supervised with follows this list.)
- Transformer-based Map Matching Model with Limited Ground-Truth Data using Transfer-Learning Approach (2021-08-01)
  In many trajectory-based applications, it is necessary to map raw GPS trajectories onto road networks in digital maps.
  In this paper, we consider the map-matching task from the data perspective, proposing a deep learning-based map-matching model.
  We generate synthetic trajectory data to pre-train the Transformer model and then fine-tune the model with a limited amount of ground-truth data.
- Motion Planning Transformers: One Model to Plan Them All (2021-06-05)
  We propose a transformer-based approach for efficiently solving complex motion planning problems.
  Our approach first identifies regions on the map using transformers, providing attention to map areas likely to include the best path, and then applies local planners to generate the final collision-free path.
- Augmented Parallel-Pyramid Net for Attention Guided Pose-Estimation (2020-03-17)
  This paper proposes an augmented parallel-pyramid net with an attention partial module and differentiable auto-data augmentation.
  We define a new pose search space where the sequences of data augmentations are formulated as a trainable and operational CNN component.
  Notably, our method achieves top-1 accuracy on the challenging COCO keypoint benchmark and state-of-the-art results on the MPII datasets.
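The SPT entry above frames planning as supervised prediction over an obstacle map. Below is a hedged sketch of how such training targets can be produced: a BFS distance-to-goal field on a 4-connected grid, with an obstacle-density knob that mirrors the report's "increased obstacle complexity" experiment. The map layout, sizes, and density sweep are our assumptions, not the authors' released pipeline.

```python
# Hypothetical helper (not from the paper's code): ground-truth
# distance-to-goal fields computed by BFS, usable as per-cell regression
# targets for an SPT-style planner. Unreachable cells stay at infinity.
from collections import deque
import random

def bfs_distance_field(obstacles, goal):
    """obstacles: 2D list of 0/1; returns 4-connected shortest distances."""
    h, w = len(obstacles), len(obstacles[0])
    dist = [[float("inf")] * w for _ in range(h)]
    gy, gx = goal
    dist[gy][gx] = 0
    queue = deque([goal])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not obstacles[ny][nx] \
                    and dist[ny][nx] == float("inf"):
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

def random_map(size=15, density=0.2, seed=0):
    """Random obstacle grid; sweeping `density` upward probes the
    increased-obstacle-complexity setting mentioned in the report."""
    rng = random.Random(seed)
    grid = [[1 if rng.random() < density else 0 for _ in range(size)]
            for _ in range(size)]
    goal = (size // 2, size // 2)
    grid[goal[0]][goal[1]] = 0  # keep the goal cell free
    return grid, goal

grid, goal = random_map(density=0.3)
targets = bfs_distance_field(grid, goal)  # supervision for per-cell regression
```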