Wheelchair Behavior Recognition for Visualizing Sidewalk Accessibility
by Deep Neural Networks
- URL: http://arxiv.org/abs/2101.03724v1
- Date: Mon, 11 Jan 2021 06:41:42 GMT
- Title: Wheelchair Behavior Recognition for Visualizing Sidewalk Accessibility
by Deep Neural Networks
- Authors: Takumi Watanabe, Hiroki Takahashi, Goh Sato, Yusuke Iwasawa, Yutaka
Matsuo, Ikuko Eguchi Yairi
- Abstract summary: This paper introduces our methodology for estimating sidewalk accessibility from wheelchair behavior via a triaxial accelerometer in a smartphone installed under a wheelchair seat.
Our method recognizes sidewalk accessibility from environmental factors, e.g., gradients, curbs, and gaps.
The paper develops and evaluates a prototype system that visualizes sidewalk accessibility information by extracting knowledge from wheelchair acceleration.
- Score: 19.671946716832203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces our methodology for estimating sidewalk
accessibility from wheelchair behavior, sensed via a triaxial accelerometer
in a smartphone installed under a wheelchair seat. Our method recognizes
sidewalk accessibility from environmental factors, e.g., gradients, curbs,
and gaps, which affect the wheelchair body and become a burden for people
with mobility difficulties. We developed and evaluated a prototype system
that visualizes sidewalk accessibility information by extracting knowledge
from wheelchair acceleration using deep neural networks. First, we created a
supervised convolutional neural network model that classifies road surface
conditions from wheelchair acceleration data. Second, we applied a weakly
supervised method to extract representations of road surface conditions
without manual annotations. Finally, we developed a self-supervised
variational autoencoder to assess sidewalk barriers for wheelchair users. The
results show that the proposed method estimates sidewalk accessibility from
wheelchair acceleration and extracts accessibility knowledge through the
weakly supervised and self-supervised approaches.
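As a concrete illustration of the supervised first step, here is a minimal sketch of a 1D convolutional classifier over windows of triaxial wheelchair acceleration, written in PyTorch. The window length (256 samples), channel widths, and the four-class surface set (flat / gradient / curb / gap) are illustrative guesses, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class RoadSurfaceCNN(nn.Module):
    """Classifies road surface conditions from triaxial acceleration windows."""
    def __init__(self, n_classes: int = 4, window: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=7, padding=3),  # 3 channels: x/y/z acceleration
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window // 4), n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, window) raw accelerometer samples
        return self.classifier(self.features(x).flatten(1))

# One supervised training step on a dummy batch; the label set
# (flat / gradient / curb / gap) is hypothetical.
model = RoadSurfaceCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 3, 256)            # 8 acceleration windows
y = torch.randint(0, 4, (8,))         # surface-condition labels
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```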
Related papers
- Pedestrian motion prediction evaluation for urban autonomous driving [0.0]
We analyze selected publications with open-source implementations to assess the value of traditional motion prediction metrics.
This perspective should be valuable to any autonomous driving or robotics engineer looking for the real-world performance of existing state-of-the-art pedestrian motion prediction methods.
arXiv Detail & Related papers (2024-10-22T10:06:50Z)
- WheelPoser: Sparse-IMU Based Body Pose Estimation for Wheelchair Users [7.5279679789210645]
We present WheelPoser, a real-time pose estimation system specifically designed for wheelchair users.
Our system uses only four strategically placed IMUs on the user's body and wheelchair, making it far more practical than prior systems using cameras and dense IMU arrays.
WheelPoser is able to track a wheelchair user's pose with a mean joint angle error of 14.30 degrees and a mean joint position error of 6.74 cm, more than three times better than similar systems using sparse IMUs (a sketch of these two metrics follows this entry).
arXiv Detail & Related papers (2024-09-13T02:41:49Z)
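A hedged sketch of the two pose metrics quoted above, mean joint angle error (degrees) and mean joint position error (cm); the array shapes and the angle wrap-around handling are assumptions for illustration, not WheelPoser's evaluation code.

```python
import numpy as np

def mean_joint_angle_error(pred_deg: np.ndarray, true_deg: np.ndarray) -> float:
    """pred_deg, true_deg: (frames, joints) joint angles in degrees."""
    d = np.abs(pred_deg - true_deg) % 360.0
    d = np.minimum(d, 360.0 - d)        # take the shorter way around the circle
    return float(np.mean(d))

def mean_joint_position_error(pred_cm: np.ndarray, true_cm: np.ndarray) -> float:
    """pred_cm, true_cm: (frames, joints, 3) joint positions in centimetres."""
    return float(np.mean(np.linalg.norm(pred_cm - true_cm, axis=-1)))
```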
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and over the course of training they approach the performance of a human driver using a similar first-person interface.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enrich contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset (a sketch of this metric follows this entry).
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
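One common reading of "displacement error" on trajectory benchmarks such as nuScenes is average displacement error (ADE), the mean L2 distance between predicted and ground-truth waypoints. The sketch below assumes that definition; it is not the paper's own evaluation code.

```python
import numpy as np

def average_displacement_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """pred, gt: (timesteps, 2) predicted vs. ground-truth x/y waypoints in metres."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

pred = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]])
gt   = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(average_displacement_error(pred, gt))  # ~0.13
```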
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner, a neural network trained to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network (a sketch of such a planner head follows this entry).
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
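A minimal sketch of what a "tiny" onboard planner head could look like: a small network mapping a feature vector to acceleration and steering angle. The input size, layer widths, and tanh output squashing are invented for the example, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyPlanner(nn.Module):
    """Maps a feature vector to normalised [acceleration, steering angle]."""
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.net(obs))  # both outputs squashed to [-1, 1]

commands = TinyPlanner()(torch.randn(1, 64))  # -> tensor of shape (1, 2)
```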
- Monocular Vision-based Prediction of Cut-in Maneuvers with LSTM Networks [0.0]
This study proposes a method to predict potentially dangerous cut-in maneuvers happening in the ego lane.
We follow a computer vision-based approach that only employs a single in-vehicle RGB camera.
Our algorithm consists of a CNN-based vehicle detection and tracking step followed by an LSTM-based maneuver classification step (a sketch of the classification stage follows this entry).
arXiv Detail & Related papers (2022-03-21T02:30:36Z)
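A minimal sketch of the second stage: an LSTM that classifies a tracked vehicle's recent per-frame features as cut-in vs. no-cut-in. The feature choice (e.g. bounding-box position and size) and all dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CutInLSTM(nn.Module):
    """Classifies a tracked vehicle's feature sequence as cut-in vs. no-cut-in."""
    def __init__(self, n_features: int = 4, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, tracks: torch.Tensor) -> torch.Tensor:
        # tracks: (batch, frames, n_features) per-frame track features
        _, (h, _) = self.lstm(tracks)
        return self.head(h[-1])           # logits from the final hidden state

logits = CutInLSTM()(torch.randn(2, 30, 4))  # two hypothetical 30-frame tracks
```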
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate (a sketch of the observation fusion follows this entry).
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
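A minimal sketch of fusing sparse exteroceptive input (a handful of terrain height probes) with proprioception (joint states) ahead of a policy network; all sizes are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class PerceptiveLocomotionPolicy(nn.Module):
    """Concatenates sparse height probes with joint states before a policy MLP."""
    def __init__(self, n_probes: int = 20, n_proprio: int = 36, n_actions: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_probes + n_proprio, 128),
            nn.ELU(),
            nn.Linear(128, n_actions),     # e.g. target joint positions
        )

    def forward(self, heights: torch.Tensor, proprio: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([heights, proprio], dim=-1))
```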
- Driving-Signal Aware Full-Body Avatars [49.89791440532946]
We present a learning-based method for building driving-signal aware full-body avatars.
Our model is a conditional variational autoencoder that can be animated with incomplete driving signals.
We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence (a sketch of masked conditioning follows this entry).
arXiv Detail & Related papers (2021-05-21T16:22:38Z)
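A hedged sketch of a conditional VAE that tolerates incomplete driving signals: missing entries are zeroed and flagged by an observation mask fed in as part of the condition. The dimensions and the masking scheme are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class MaskedCVAE(nn.Module):
    """Conditional VAE whose condition carries an observed-signal mask."""
    def __init__(self, x_dim: int = 128, cond_dim: int = 16, z_dim: int = 32):
        super().__init__()
        self.enc = nn.Linear(x_dim + 2 * cond_dim, 2 * z_dim)
        self.dec = nn.Linear(z_dim + 2 * cond_dim, x_dim)

    def forward(self, x, cond, mask):
        # mask: 1 where a driving signal is observed, 0 where it is missing
        c = torch.cat([cond * mask, mask], dim=-1)
        mu, logvar = self.enc(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

x, cond = torch.randn(4, 128), torch.randn(4, 16)
mask = (torch.rand(4, 16) > 0.5).float()     # roughly half the signals missing
recon, mu, logvar = MaskedCVAE()(x, cond, mask)
```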
- Open Area Path Finding to Improve Wheelchair Navigation [0.0]
This paper proposes and implements a novel path finding algorithm for open areas with no network of pathways.
The proposed algorithm builds a new graph over the open area that accounts for obstacles and barriers, and computes the path on that graph (a sketch of this idea follows this entry).
The implementations and tests show at least a 76.4% similarity between the proposed algorithm's outputs and actual wheelchair users' trajectories.
arXiv Detail & Related papers (2020-11-07T21:20:32Z)
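A minimal sketch of the core idea under simplifying assumptions: rasterize the open area into a grid graph, drop cells blocked by obstacles, and run a standard shortest-path search (A* with unit step costs here; the paper's actual graph construction and cost model may differ).

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no accessible route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```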
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for detecting driver intention based on both in-cabin and traffic-scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
- An Intelligent and Low-cost Eye-tracking System for Motorized Wheelchair Control [3.3003775275716376]
The paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly.
The system's input was images of the user's eye, which were processed to estimate the gaze direction; the wheelchair was then moved accordingly (a sketch of this mapping follows this entry).
arXiv Detail & Related papers (2020-05-02T23:08:33Z)
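A minimal sketch of the control mapping described above: an estimated gaze direction is quantized into a wheelchair motion command. The gaze estimator itself (the eye-image processing) is abstracted away, and the thresholds and command set are invented for the example.

```python
def gaze_to_command(gaze_x: float, gaze_y: float, dead_zone: float = 0.2) -> str:
    """gaze_x, gaze_y: normalised gaze direction in [-1, 1]; (0, 0) = straight ahead."""
    if abs(gaze_x) < dead_zone and abs(gaze_y) < dead_zone:
        return "stop"                      # near-centre gaze: hold position
    if abs(gaze_y) >= abs(gaze_x):
        return "forward" if gaze_y > 0 else "reverse"  # up = forward (assumed)
    return "turn_right" if gaze_x > 0 else "turn_left"

print(gaze_to_command(0.8, 0.1))  # turn_right
```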
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.