Relevance for Human Robot Collaboration
- URL: http://arxiv.org/abs/2409.07753v2
- Date: Sat, 12 Oct 2024 20:19:12 GMT
- Title: Relevance for Human Robot Collaboration
- Authors: Xiaotong Zhang, Dingcheng Huang, Kamal Youcef-Toumi
- Abstract summary: This paper introduces a novel concept and scene-understanding approach termed 'relevance'.
To accurately and efficiently quantify relevance, we developed an event-based framework that selectively triggers relevance determination.
A real-world demonstration showcases the relevance framework's ability to intelligently assist humans in everyday tasks.
- Score: 6.009969292588733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective human-robot collaboration (HRC) requires robots to possess human-like intelligence. Inspired by humans' cognitive ability to selectively process and filter elements in complex environments, this paper introduces a novel concept and scene-understanding approach termed 'relevance,' which identifies the relevant components in a scene. To accurately and efficiently quantify relevance, we developed an event-based framework that selectively triggers relevance determination, along with a probabilistic methodology built on a structured scene representation. Simulation results demonstrate that the relevance framework and methodology accurately predict the relevance of a general HRC setup, achieving a precision of 0.99 and a recall of 0.94. Relevance can be broadly applied to several areas in HRC: it improves task planning time by 79.56% compared with pure planning for a cereal task, reduces perception latency by up to 26.53% for an object detector, improves HRC safety by up to 13.50%, and reduces the number of inquiries for HRC by 80.84%. A real-world demonstration showcases the relevance framework's ability to intelligently assist humans in everyday tasks.
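The abstract describes the mechanism only at a high level: relevance is recomputed when a scene event fires, rather than on every perception frame, and a probabilistic model over a structured scene representation does the scoring. The sketch below illustrates that event-triggered pattern; every name and threshold in it (`SceneGraph`, `relevance_scores`, `EVENT_THRESHOLD`) is a hypothetical stand-in, not the authors' implementation.

```python
# Hypothetical sketch of an event-based relevance framework: relevance is
# recomputed only when the scene changes enough to fire an event, which is
# what makes it cheaper than re-running scene understanding every frame.
from dataclasses import dataclass, field

EVENT_THRESHOLD = 0.2  # assumed scene-change threshold, not from the paper

@dataclass
class SceneObject:
    name: str
    position: tuple  # (x, y, z) in the robot frame

@dataclass
class SceneGraph:
    objects: dict = field(default_factory=dict)  # name -> SceneObject

def scene_change(prev: SceneGraph, curr: SceneGraph) -> float:
    """Fraction of objects that appeared, disappeared, or moved."""
    names = set(prev.objects) | set(curr.objects)
    if not names:
        return 0.0
    changed = 0
    for n in names:
        a, b = prev.objects.get(n), curr.objects.get(n)
        if a is None or b is None or a.position != b.position:
            changed += 1
    return changed / len(names)

def relevance_scores(scene: SceneGraph, task_keywords: set) -> dict:
    """Toy stand-in for the paper's probabilistic relevance model:
    score objects by overlap with task-related keywords."""
    return {
        n: 1.0 if any(k in n for k in task_keywords) else 0.1
        for n in scene.objects
    }

def run(frames, task_keywords):
    prev, scores = SceneGraph(), {}
    for scene in frames:
        # Event trigger: recompute relevance only on significant change.
        if scene_change(prev, scene) > EVENT_THRESHOLD:
            scores = relevance_scores(scene, task_keywords)
        prev = scene
        yield scores
```

For the cereal task mentioned in the abstract, `task_keywords` might be {'bowl', 'milk', 'cereal'}, so downstream planning and perception can be restricted to the handful of objects that score high.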
Related papers
- Advancing Embodied Agent Security: From Safety Benchmarks to Input Moderation [52.83870601473094]
Embodied agents exhibit immense potential across a multitude of domains.
Existing research predominantly concentrates on the security of general large language models.
This paper introduces a novel input moderation framework, meticulously designed to safeguard embodied agents.
arXiv Detail & Related papers (2025-04-22T08:34:35Z)
- Uncertainty Aware Human-machine Collaboration in Camouflaged Object Detection [12.2304109417748]
A key step toward developing trustworthy COD systems is the estimation and effective utilization of uncertainty.
In this work, we propose a human-machine collaboration framework for classifying the presence of camouflaged objects.
Our approach introduces a multiview backbone to estimate uncertainty in CV model predictions, utilizes this uncertainty during training to improve efficiency, and defers low-confidence cases to human evaluation.
arXiv Detail & Related papers (2025-02-12T13:05:24Z)
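The deferral mechanism in the entry above reduces to a small amount of logic: predictions whose estimated uncertainty is too high, or whose probability sits near the decision boundary, are routed to a human instead of being auto-accepted. This is a minimal sketch assuming ensemble disagreement as the uncertainty proxy and illustrative thresholds; the paper's multiview backbone is more involved.

```python
# Minimal sketch of uncertainty-gated deferral to a human, using disagreement
# across several model views as a stand-in for the paper's uncertainty estimate.
import statistics

DEFER_SPREAD = 0.3   # assumed disagreement threshold
AMBIGUOUS = 0.1      # assumed margin around the 0.5 decision boundary

def classify_or_defer(view_probs: list[float]) -> str:
    """view_probs: P(camouflaged object present) from several model views."""
    mean_p = statistics.mean(view_probs)
    spread = statistics.pstdev(view_probs)  # disagreement = uncertainty proxy
    if spread > DEFER_SPREAD or abs(mean_p - 0.5) < AMBIGUOUS:
        return "defer_to_human"
    return "present" if mean_p >= 0.5 else "absent"

print(classify_or_defer([0.92, 0.88, 0.95]))  # confident -> "present"
print(classify_or_defer([0.10, 0.90, 0.50]))  # high disagreement -> defer
```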
- PARTNR: A Benchmark for Planning and Reasoning in Embodied Multi-agent Tasks [57.89516354418451]
We present a benchmark for Planning And Reasoning Tasks in humaN-Robot collaboration (PARTNR).
We employ a semi-automated task generation pipeline using Large Language Models (LLMs)
We analyze state-of-the-art LLMs on PARTNR tasks, across the axes of planning, perception and skill execution.
arXiv Detail & Related papers (2024-10-31T17:53:12Z)
- Potential Field as Scene Affordance for Behavior Change-Based Visual Risk Object Identification [4.896236083290351]
We study behavior change-based visual risk object identification (Visual-ROI)
Existing methods often show significant limitations in spatial accuracy and temporal consistency.
We propose a new framework with a Bird's Eye View representation to overcome these challenges.
arXiv Detail & Related papers (2024-09-24T08:17:50Z)
- Relevance-driven Decision Making for Safer and More Efficient Human Robot Collaboration [6.009969292588733]
We introduced a novel concept termed relevance for Human-Robot Collaboration (HRC).
Relevance is defined as the importance of objects based on their applicability and pertinence to the human objective or other factors.
We developed a novel two-loop framework integrating real-time and asynchronous processing to quantify relevance and apply it for safer and more efficient HRC.
arXiv Detail & Related papers (2024-09-21T03:20:53Z)
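The two-loop split above is a familiar systems pattern: a slow, asynchronous loop performs the expensive relevance quantification while a fast, real-time loop acts on whatever estimate is currently cached. The sketch below shows that pattern only; the loop rates, shared-state layout, and placeholder scores are assumptions, not details from the paper.

```python
# Sketch of the two-loop pattern: an asynchronous loop refreshes relevance
# estimates while a real-time loop consumes the latest cached values.
import threading
import time

latest_relevance: dict = {}  # shared state: object name -> relevance score
lock = threading.Lock()

def relevance_loop(stop: threading.Event) -> None:
    """Slow asynchronous loop: expensive relevance quantification (~1 Hz)."""
    while not stop.is_set():
        scores = {"bowl": 0.9, "knife": 0.7}  # placeholder for the real model
        with lock:
            latest_relevance.update(scores)
        time.sleep(1.0)

def control_loop(stop: threading.Event, hz: float = 50.0) -> None:
    """Fast real-time loop: uses cached relevance for safety decisions."""
    while not stop.is_set():
        with lock:
            critical = [o for o, s in latest_relevance.items() if s > 0.8]
        # e.g., reduce robot speed near the highly relevant objects in `critical`
        time.sleep(1.0 / hz)

stop = threading.Event()
threading.Thread(target=relevance_loop, args=(stop,), daemon=True).start()
threading.Thread(target=control_loop, args=(stop,), daemon=True).start()
time.sleep(3.0)
stop.set()
```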
- Bi-Factorial Preference Optimization: Balancing Safety-Helpfulness in Language Models [94.39278422567955]
Fine-tuning large language models (LLMs) on human preferences has proven successful in enhancing their capabilities.
However, ensuring the safety of LLMs during fine-tuning remains a critical concern.
We propose a supervised learning framework called Bi-Factorial Preference Optimization (BFPO) to address this issue.
arXiv Detail & Related papers (2024-08-27T17:31:21Z)
- Offline Risk-sensitive RL with Partial Observability to Enhance Performance in Human-Robot Teaming [1.3980986259786223]
We propose a method to incorporate model uncertainty, thus enabling risk-sensitive sequential decision-making.
Experiments were conducted with a group of twenty-six human participants within a simulated robot teleoperation environment.
arXiv Detail & Related papers (2024-02-08T14:27:34Z)
- Provably Efficient Iterated CVaR Reinforcement Learning with Function Approximation and Human Feedback [57.6775169085215]
Risk-sensitive reinforcement learning aims to optimize policies that balance the expected reward and risk.
We present a novel framework that employs an Iterated Conditional Value-at-Risk (CVaR) objective under both linear and general function approximations.
We propose provably sample-efficient algorithms for this Iterated CVaR RL and provide rigorous theoretical analysis.
arXiv Detail & Related papers (2023-07-06T08:14:54Z)
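The risk objective named above is easy to state concretely: for a risk level alpha, CVaR is the mean of the worst alpha-fraction of outcomes, so optimizing it penalizes bad tails that a plain expectation hides. The sketch computes empirical CVaR on a sample of returns; the paper's contribution, iterating this objective through the Bellman recursion under function approximation, is not reproduced here.

```python
# Empirical Conditional Value-at-Risk: the mean of the worst alpha-fraction
# of returns. Risk-sensitive RL optimizes this instead of the plain mean.
import math

def cvar(returns: list[float], alpha: float = 0.1) -> float:
    """CVaR_alpha of sampled returns (lower return = worse outcome)."""
    k = max(1, math.ceil(alpha * len(returns)))
    worst = sorted(returns)[:k]
    return sum(worst) / len(worst)

returns = [10, 12, 11, -50, 9, 13, 10, -40, 12, 11]
print(sum(returns) / len(returns))   # mean = -0.2, hides the rare disasters
print(cvar(returns, alpha=0.2))      # CVaR = -45.0, exposes the bad tail
```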
- Rearrange Indoor Scenes for Human-Robot Co-Activity [82.22847163761969]
We present an optimization-based framework for rearranging indoor furniture to better accommodate human-robot co-activities.
Our algorithm preserves the functional relations among furniture by integrating spatial and semantic co-occurrence extracted from SUNCG and ConceptNet.
Our experiments show that rearranged scenes provide an average of 14% more accessible space and 30% more objects to interact with.
arXiv Detail & Related papers (2023-03-10T03:03:32Z)
- Intuitive and Efficient Human-robot Collaboration via Real-time Approximate Bayesian Inference [4.310882094628194]
Collaborative robots and end-to-end AI promise flexible automation of human tasks in factories and warehouses.
Humans and cobots will collaborate, helping each other.
For these collaborations to be effective and safe, robots need to model, predict, and exploit humans' intents.
arXiv Detail & Related papers (2022-05-17T23:04:44Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Self-supervised Representation Learning with Relative Predictive Coding [102.93854542031396]
Relative Predictive Coding (RPC) is a new contrastive representation learning objective.
RPC maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance.
We empirically verify the effectiveness of RPC on benchmark vision and speech self-supervised learning tasks.
arXiv Detail & Related papers (2021-03-21T01:04:24Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles [73.15950858151594]
This paper presents Latent Optimistic Value Exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards.
We combine latent world models with value function estimation to predict infinite-horizon returns and recover associated uncertainty via ensembling.
We apply LOVE to visual robot control tasks in continuous action spaces and demonstrate on average more than 20% improved sample efficiency in comparison to state-of-the-art and other exploration objectives.
arXiv Detail & Related papers (2020-10-27T22:06:57Z)
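The optimism mechanism in LOVE's summary can be made concrete with a toy sketch: an ensemble of value estimates gives both a mean return and a disagreement term, and exploration ranks actions by mean plus a bonus proportional to that disagreement. The flat value lists below stand in for the latent world models and value functions; `BETA` is an assumed coefficient.

```python
# Toy sketch of optimism in the face of uncertainty with a value ensemble:
# score = mean predicted return + BETA * ensemble disagreement (std dev).
import statistics

BETA = 1.0  # assumed optimism coefficient

def optimistic_score(ensemble_values: list[float]) -> float:
    return statistics.mean(ensemble_values) + BETA * statistics.pstdev(ensemble_values)

# Action A: well understood. Action B: same mean return, much less certain.
values = {"A": [5.0, 5.1, 4.9, 5.0], "B": [3.0, 7.0, 5.5, 4.5]}
best = max(values, key=lambda k: optimistic_score(values[k]))
print(best)  # "B": the disagreement bonus makes it worth exploring
```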
- Human-Robot Team Coordination with Dynamic and Latent Human Task Proficiencies: Scheduling with Learning Curves [0.0]
We introduce a novel resource coordination approach that enables robots to explore the relative strengths and learning abilities of their human teammates.
We generate and evaluate a robust schedule while discovering the latent proficiency of each individual worker.
Results indicate that scheduling strategies favoring exploration tend to be beneficial for human-robot collaboration.
arXiv Detail & Related papers (2020-07-03T19:44:22Z)
- Attention-Oriented Action Recognition for Real-Time Human-Robot Interaction [11.285529781751984]
We propose an attention-oriented multi-level network framework to meet the need for real-time interaction.
Specifically, a Pre-Attention network is employed to roughly focus on the interactor in the scene at low resolution.
A compact CNN then receives the extracted skeleton sequence as input for action recognition.
arXiv Detail & Related papers (2020-07-02T12:41:28Z)
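The multi-level design above is a coarse-to-fine pipeline: a cheap stage localizes the interactor at low resolution, and only the attended crop goes through the more expensive pose extraction and action classification. All three stage functions below are hypothetical stubs standing in for the trained networks.

```python
# Coarse-to-fine sketch of the attention-oriented pipeline: cheap low-res
# localization first, expensive recognition only on the attended crop.
import numpy as np

SCALE = 4  # assumed downsampling factor for the Pre-Attention stage

def pre_attention(frame_lowres: np.ndarray) -> tuple:
    """Stage 1 stub: bounding box (x, y, w, h) around the interactor,
    in low-resolution coordinates."""
    return (10, 5, 16, 32)

def extract_skeleton(crop: np.ndarray) -> np.ndarray:
    """Stage 2 stub: 2D keypoints for the cropped person (17 joints)."""
    return np.zeros((17, 2), dtype=np.float32)

def recognize_action(skeleton_seq: np.ndarray) -> str:
    """Stage 3 stub: compact classifier over the skeleton sequence."""
    return "wave"

def process(frames_highres: list) -> str:
    skeletons = []
    for f in frames_highres:
        x, y, w, h = pre_attention(f[::SCALE, ::SCALE])
        crop = f[y * SCALE:(y + h) * SCALE, x * SCALE:(x + w) * SCALE]
        skeletons.append(extract_skeleton(crop))
    return recognize_action(np.stack(skeletons))
```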
- Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery [85.36948722680822]
We develop a context-aware mixture of deep models termed the alpha-beta network.
We improve accuracy and F-score by 10% by identifying high-level contexts.
To ensure training stability, we use clustering-based pre-training on both public and in-house datasets.
arXiv Detail & Related papers (2020-03-03T19:35:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.