ROIAL: Region of Interest Active Learning for Characterizing Exoskeleton
Gait Preference Landscapes
- URL: http://arxiv.org/abs/2011.04812v2
- Date: Tue, 30 Mar 2021 22:59:19 GMT
- Title: ROIAL: Region of Interest Active Learning for Characterizing Exoskeleton
Gait Preference Landscapes
- Authors: Kejun Li, Maegan Tucker, Erdem Bıyık, Ellen Novoseller, Joel W.
Burdick, Yanan Sui, Dorsa Sadigh, Yisong Yue, Aaron D. Ames
- Abstract summary: Region of Interest Active Learning (ROIAL) framework actively learns each user's underlying utility function over a region of interest.
ROIAL learns from ordinal and preference feedback, which are more reliable feedback mechanisms than absolute numerical scores.
Results demonstrate the feasibility of recovering gait utility landscapes from limited human trials.
- Score: 64.87637128500889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Characterizing what types of exoskeleton gaits are comfortable for users, and
understanding the science of walking more generally, require recovering a
user's utility landscape. Learning these landscapes is challenging, as walking
trajectories are defined by numerous gait parameters, data collection from
human trials is expensive, and user safety and comfort must be ensured. This
work proposes the Region of Interest Active Learning (ROIAL) framework, which
actively learns each user's underlying utility function over a region of
interest that ensures safety and comfort. ROIAL learns from ordinal and
preference feedback, which are more reliable feedback mechanisms than absolute
numerical scores. The algorithm's performance is evaluated both in simulation
and experimentally for three non-disabled subjects walking inside of a
lower-body exoskeleton. ROIAL learns Bayesian posteriors that predict each
exoskeleton user's utility landscape across four exoskeleton gait parameters.
The algorithm discovers both commonalities and discrepancies across users' gait
preferences and identifies the gait parameters that most influenced user
feedback. These results demonstrate the feasibility of recovering gait utility
landscapes from limited human trials.
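The core loop described in the abstract, actively querying gaits inside a region of interest and updating a utility posterior from user feedback, can be sketched in a few dozen lines. The sketch below is illustrative only and is not the authors' implementation: ROIAL fits a Gaussian-process posterior with ordinal and pairwise-preference likelihoods, whereas here, for brevity, ordinal labels are treated as noisy numeric targets for an off-the-shelf GP regressor. All names (`GAIT_GRID`, `ROI_THRESHOLD`, `ask_user_for_ordinal_label`) and constants are hypothetical.

```python
# Illustrative sketch of region-of-interest active learning over a gait grid,
# in the spirit of ROIAL. NOT the authors' code: ordinal labels are treated as
# noisy regression targets instead of using ordinal/preference likelihoods.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Candidate gaits: a coarse grid over four gait parameters (e.g. step length,
# step duration, step width, pelvis roll), each normalized to [0, 1].
GAIT_GRID = np.stack(np.meshgrid(*[np.linspace(0, 1, 5)] * 4), -1).reshape(-1, 4)

ROI_THRESHOLD = 2.0   # gaits predicted below this ordinal score are avoided
N_TRIALS = 20         # human trials are expensive, so the budget is small


def ask_user_for_ordinal_label(gait):
    """Placeholder for a human trial: returns an ordinal rating in 1..4.
    Simulated here with a synthetic utility plus noise."""
    utility = 4.0 - 6.0 * np.sum((gait - 0.5) ** 2)
    return float(np.clip(round(utility + rng.normal(0, 0.3)), 1, 4))


X, y = [], []
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=0.3)

for trial in range(N_TRIALS):
    if X:
        gp.fit(np.array(X), np.array(y))
        mean, std = gp.predict(GAIT_GRID, return_std=True)
    else:
        mean, std = np.zeros(len(GAIT_GRID)), np.ones(len(GAIT_GRID))

    # Region of interest: keep only gaits not confidently predicted to be
    # uncomfortable, then sample where the posterior is most uncertain.
    roi = (mean + 2 * std) >= ROI_THRESHOLD
    candidates = np.where(roi)[0] if roi.any() else np.arange(len(GAIT_GRID))
    next_idx = candidates[np.argmax(std[candidates])]

    gait = GAIT_GRID[next_idx]
    X.append(gait)
    y.append(ask_user_for_ordinal_label(gait))

# The fitted posterior mean over GAIT_GRID is the recovered utility landscape.
gp.fit(np.array(X), np.array(y))
landscape = gp.predict(GAIT_GRID)
print("best predicted gait parameters:", GAIT_GRID[np.argmax(landscape)])
```

Restricting acquisition to the region of interest is what keeps gaits that are confidently predicted to be uncomfortable or unsafe out of the human trials while the posterior is still uncertain elsewhere.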
Related papers
- Adaptive Language-Guided Abstraction from Contrastive Explanations [53.48583372522492]
It is necessary to determine which features of the environment are relevant before determining how these features should be used to compute reward.
End-to-end methods for joint feature and reward learning often yield brittle reward functions that are sensitive to spurious state features.
This paper describes a method named ALGAE, which alternates between using language models to iteratively identify human-meaningful features and learning a reward defined over those features.
arXiv Detail & Related papers (2024-09-12T16:51:58Z) - SkeleTR: Towards Skeleton-based Action Recognition in the Wild [86.03082891242698]
SkeleTR is a new framework for skeleton-based action recognition.
It first models the intra-person skeleton dynamics for each skeleton sequence with graph convolutions.
It then uses stacked Transformer encoders to capture person interactions that are important for action recognition in general scenarios.
arXiv Detail & Related papers (2023-09-20T16:22:33Z) - Skeleton-Based Mutually Assisted Interacted Object Localization and
Human Action Recognition [111.87412719773889]
We propose a joint learning framework for "interacted object localization" and "human action recognition" based on skeleton data.
Our method achieves the best or competitive performance with the state-of-the-art methods for human action recognition.
arXiv Detail & Related papers (2021-10-28T10:09:34Z) - Learning Reward Functions from Scale Feedback [11.941038991430837]
A common framework is to iteratively query the user about which of two presented robot trajectories they prefer.
We propose scale feedback, where the user utilizes a slider to give more nuanced information.
We demonstrate the performance benefit of slider feedback in simulations, and validate our approach in two user studies.
arXiv Detail & Related papers (2021-10-01T09:45:18Z) - Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual
Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z) - User Role Discovery and Optimization Method based on K-means +
Reinforcement learning in Mobile Applications [0.3655021726150368]
A long-term stable set of features shared across users can be abstracted as a user role.
The role is closely related to the user's social background, occupation, and living habits.
arXiv Detail & Related papers (2021-07-02T06:40:12Z) - Exoskeleton-Based Multimodal Action and Movement Recognition:
Identifying and Developing the Optimal Boosted Learning Approach [0.0]
This paper makes two scientific contributions to the field of exoskeleton-based action and movement recognition.
It presents a novel machine learning and pattern recognition-based framework that can detect a wide range of actions and movements.
arXiv Detail & Related papers (2021-06-18T19:43:54Z) - 3D Convolution Neural Network based Person Identification using Gait
cycles [0.0]
In this work, gait features are used to identify an individual. The steps involve object detection, background subtraction, silhouette extraction, skeletonization, and training a 3D Convolutional Neural Network on these gait features.
The proposed method focuses more on the lower body part to extract features such as the angle between knee and thighs, hip angle, angle of contact, and many other features.
arXiv Detail & Related papers (2021-06-06T14:27:06Z) - Human Preference-Based Learning for High-dimensional Optimization of
Exoskeleton Walking Gaits [55.59198568303196]
This work presents LineCoSpar, a human-in-the-loop preference-based framework to learn user preferences in high dimensions.
In simulations and human trials, we empirically verify that LineCoSpar is a sample-efficient approach for high-dimensional preference optimization; a minimal sketch of learning from such pairwise preferences appears after this list.
This result has implications for exoskeleton gait synthesis, an active field with applications to clinical use and patient rehabilitation.
arXiv Detail & Related papers (2020-03-13T22:02:15Z)
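ROIAL above and the LineCoSpar and scale-feedback papers in this list all rely on pairwise preference feedback. As a minimal illustration of that feedback mechanism only (not of any of these papers' actual algorithms, which use Gaussian-process models), the sketch below fits a linear utility to synthetic pairwise comparisons with a Bradley-Terry / logistic likelihood; every name and number in it is made up for the example.

```python
# Minimal sketch of learning a utility function from pairwise preference
# feedback via a Bradley-Terry / logistic likelihood. Synthetic data; the
# linear utility model is a simplification chosen for brevity.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0, 0.5, 1.5])   # hidden "ground-truth" utility weights


def simulate_preference(x_a, x_b):
    """Noisy comparison: returns 1 if the simulated user prefers gait x_a over x_b."""
    p = 1.0 / (1.0 + np.exp(-(true_w @ (x_a - x_b))))
    return int(rng.random() < p)


# Collect synthetic pairwise comparisons between random gait-parameter vectors.
pairs = [(rng.random(4), rng.random(4)) for _ in range(200)]
labels = [simulate_preference(a, b) for a, b in pairs]

# Bradley-Terry likelihood: P(a preferred over b) = sigmoid(w @ (a - b)),
# fitted by plain gradient ascent on the log-likelihood.
w = np.zeros(4)
lr = 0.5
for _ in range(500):
    grad = np.zeros(4)
    for (a, b), y in zip(pairs, labels):
        d = a - b
        p = 1.0 / (1.0 + np.exp(-(w @ d)))
        grad += (y - p) * d                 # gradient of the log-likelihood term
    w += lr * grad / len(pairs)

print("recovered utility weights (up to scale):", np.round(w, 2))
```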
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.