Teach Me How to Learn: A Perspective Review towards User-centered
Neuro-symbolic Learning for Robotic Surgical Systems
- URL: http://arxiv.org/abs/2307.03853v1
- Date: Fri, 7 Jul 2023 21:58:28 GMT
- Title: Teach Me How to Learn: A Perspective Review towards User-centered
Neuro-symbolic Learning for Robotic Surgical Systems
- Authors: Amr Gomaa, Bilal Mahdy, Niko Kleer, Michael Feld, Frank Kirchner,
Antonio Krüger
- Abstract summary: Recent advances in machine learning allowed robots to identify objects on a perceptual nonsymbolic level.
An alternative solution is to teach a robot on both perceptual nonsymbolic and conceptual symbolic levels.
This work proposes a concept for this user-centered hybrid learning paradigm that focuses on robotic surgical situations.
- Score: 3.5672486441844553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in machine learning models allowed robots to identify objects
on a perceptual nonsymbolic level (e.g., through sensor fusion and natural
language understanding). However, these primarily black-box learning models
still lack interpretability and transferability and impose high data and
computational demands. An alternative solution is to teach a robot on both
perceptual nonsymbolic and conceptual symbolic levels through hybrid
neurosymbolic learning approaches with expert feedback (i.e., human-in-the-loop
learning). This work proposes a concept for this user-centered hybrid learning
paradigm that focuses on robotic surgical situations. While most recent
research has focused on hybrid learning for non-robotic and some generic
robotic domains, little work has addressed surgical robotics. We survey this related
research while focusing on human-in-the-loop surgical robotic systems. This
evaluation highlights the most prominent solutions for autonomous surgical
robots and the challenges surgeons face when interacting with these systems.
Finally, we envision possible ways to address these challenges using online
apprenticeship learning based on implicit and explicit feedback from expert
surgeons.
Related papers
- What Matters to You? Towards Visual Representation Alignment for Robot
Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem.
arXiv Detail & Related papers (2023-10-11T23:04:07Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid
robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to the physical world.
We develop a simple approach, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of instrument pose estimation, online registration between the robotic and iOCT systems, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Human-in-the-loop Embodied Intelligence with Interactive Simulation
Environment for Surgical Robot Learning [19.390115282150337]
We study human-in-the-loop embodied intelligence with a new interactive simulation platform for surgical robot learning.
Specifically, we establish our platform based on our previously released SurRoL simulator with several new features.
We showcase the improvement of our simulation environment with the newly designed features, and validate the effectiveness of incorporating human factors in embodied intelligence.
arXiv Detail & Related papers (2023-01-01T18:05:25Z)
- Dual-Arm Adversarial Robot Learning [0.6091702876917281]
We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
arXiv Detail & Related papers (2021-10-15T12:51:57Z)
- Neuroscience-inspired perception-action in robotics: applying active
inference for state estimation, control and self-perception [2.1067139116005595]
We discuss how neuroscience findings open up opportunities to improve current estimation and control algorithms in robotics.
This paper summarizes some experiments and lessons learned from developing such a computational model on real embodied platforms.
arXiv Detail & Related papers (2021-05-10T10:59:38Z)
- Embedded Computer Vision System Applied to a Four-Legged Line Follower
Robot [0.0]
This project aims to drive a robot using an automated computer vision embedded system, connecting the robot's vision to its behavior.
The robot is applied to a typical mobile-robot task: line following.
Decision making of where to move next is based on the line center of the path and is fully automated.
arXiv Detail & Related papers (2021-01-12T23:52:53Z)
- The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z) - A Survey of Behavior Learning Applications in Robotics -- State of the Art and Perspectives [44.45953630612019]
Recent success of machine learning in many domains has been overwhelming.
We will give a broad overview of behaviors that have been learned and used on real robots.
arXiv Detail & Related papers (2019-06-05T07:54:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.