RCA: Ride Comfort-Aware Visual Navigation via Self-Supervised Learning
- URL: http://arxiv.org/abs/2207.14460v1
- Date: Fri, 29 Jul 2022 03:38:41 GMT
- Title: RCA: Ride Comfort-Aware Visual Navigation via Self-Supervised Learning
- Authors: Xinjie Yao, Ji Zhang, Jean Oh
- Abstract summary: Under shared autonomy, wheelchair users expect vehicles to provide safe and comfortable rides while following users' high-level navigation plans.
We propose to model ride comfort explicitly in traversability analysis using proprioceptive sensing.
We show our navigation system provides human-preferred ride comfort through robot experiments together with a human evaluation study.
- Score: 14.798955901284847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Under shared autonomy, wheelchair users expect vehicles to provide safe and
comfortable rides while following users' high-level navigation plans. To find
such a path, vehicles must negotiate different terrains and assess their
traversal difficulty. Most prior works model surroundings either through
geometric representations or semantic classifications, which do not reflect
perceived motion intensity and ride comfort in downstream navigation tasks. We
propose to model ride comfort explicitly in traversability analysis using
proprioceptive sensing. We develop a self-supervised learning framework to
predict traversability costmap from first-person-view images by leveraging
vehicle states as training signals. Our approach estimates, from terrain
appearance, how the vehicle would feel if it were to traverse that terrain. We then show our
navigation system provides human-preferred ride comfort through robot
experiments together with a human evaluation study.
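The core of the self-supervision described above is turning proprioceptive measurements into training labels for a vision model. A minimal sketch of that labeling step, assuming a simple RMS-vibration cost over vertical IMU acceleration (the function name and exact cost definition are illustrative assumptions, not the paper's implementation):

```python
import math

def comfort_cost(accel_z, window=5):
    """Label each timestep with the local RMS deviation of vertical
    acceleration from gravity -- a simple proxy for perceived motion
    intensity. Rough terrain (large oscillations) yields a high cost;
    these costs could then supervise an image-to-costmap predictor."""
    g = 9.81
    costs = []
    for i in range(len(accel_z) - window + 1):
        chunk = accel_z[i:i + window]
        costs.append(math.sqrt(sum((a - g) ** 2 for a in chunk) / window))
    return costs

# Smooth pavement: readings stay near gravity -> near-zero cost.
smooth = [9.81, 9.80, 9.82, 9.81, 9.80]
# Grass or gravel: large vertical oscillations -> high cost.
rough = [9.81, 12.5, 7.0, 13.1, 6.4]

print(comfort_cost(smooth)[0] < comfort_cost(rough)[0])  # True
```

In a full pipeline, each cost label would be paired with the first-person-view image captured at that timestep, so the network learns to predict ride comfort from appearance alone.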
Related papers
- Learning Navigational Visual Representations with Semantic Map
Supervision [85.91625020847358]
We propose a navigation-specific visual representation learning method by contrasting the agent's egocentric views and semantic maps.
Ego$2$-Map learning transfers the compact and rich information from a map, such as objects, structure and transition, to the agent's egocentric representations for navigation.
arXiv Detail & Related papers (2023-07-23T14:01:05Z)
- ScaTE: A Scalable Framework for Self-Supervised Traversability Estimation in Unstructured Environments [7.226357394861987]
In this work, we introduce a scalable framework for learning self-supervised traversability.
We train a neural network that predicts the proprioceptive experience that a vehicle would undergo from 3D point clouds.
With driving data of various vehicles gathered from simulation and the real world, we show that our framework is capable of learning the self-supervised traversability of various vehicles.
arXiv Detail & Related papers (2022-09-14T09:52:26Z)
- Human-Vehicle Cooperative Visual Perception for Shared Autonomous Driving [9.537146822132904]
This paper proposes a human-vehicle cooperative visual perception method to enhance the visual perception ability of shared autonomous driving.
Based on transfer learning, object detection reaches an mAP of 75.52%, laying a solid foundation for visual fusion.
This study pioneers a cooperative visual perception solution for shared autonomous driving and conducts experiments in complex real-world traffic-conflict scenarios.
arXiv Detail & Related papers (2021-12-17T03:17:05Z)
- Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates reactive navigation and path-planning software.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning around obstacles.
Results show that our augmented navigation system improves navigation performance by reducing the time and distance needed to reach goals, and significantly reduces the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069]
"Embodied visual navigation" problem requires an agent to navigate in a 3D environment mainly rely on its first-person observation.
This paper attempts to establish an outline of the current works in the field of embodied visual navigation by providing a comprehensive literature survey.
arXiv Detail & Related papers (2021-07-07T12:09:04Z)
- Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry [55.11913183006984]
We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
arXiv Detail & Related papers (2021-03-20T16:29:01Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
- PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving [8.105493956485583]
We use a conditional imitation learning algorithm to predict trajectories for the ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available dataset nuScenes.
arXiv Detail & Related papers (2020-03-09T16:55:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.