A Data-driven Human Responsibility Management System
- URL: http://arxiv.org/abs/2012.03190v1
- Date: Sun, 6 Dec 2020 06:16:51 GMT
- Title: A Data-driven Human Responsibility Management System
- Authors: Xuejiao Tang, Jiong Qiu, Ruijun Chen, Wenbin Zhang, Vasileios Iosifidis, Zhen Liu, Wei Meng, Mingli Zhang and Ji Zhang
- Abstract summary: An ideal safe workplace is described as a place where staff fulfill their responsibilities in a well-organized order.
Occupation-related deaths and injuries are still increasing and have drawn considerable attention in recent decades due to the lack of comprehensive safety management.
A smart safety management system is therefore urgently needed, in which staff are instructed to fulfill their responsibilities.
- Score: 11.650998188402209
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An ideal safe workplace is described as a place where staff fulfill their
responsibilities in a well-organized order, potentially hazardous events are
monitored in real time, and the number of accidents and the resulting damages
are minimized. However, occupation-related deaths and injuries are still
increasing and have drawn considerable attention in recent decades due to the
lack of comprehensive safety management. A smart safety management system is
therefore urgently needed, in which staff are instructed to fulfill their
responsibilities, risk evaluations are automated, and staff and departments are
alerted when needed. In this paper, a smart system for safety management in the
workplace based on responsibility big data analysis and the Internet of Things
(IoT) is proposed. The real-world implementation and assessment demonstrate
that the proposed system achieves superior accountability performance and
improves responsibility fulfillment through real-time supervision and
self-reminders.
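The paper itself publishes no code, but as a rough illustration of the workflow the abstract describes (collect responsibilities, evaluate risk automatically, remind staff, and escalate to departments), the Python sketch below mocks that loop. All class names, fields, and thresholds are illustrative assumptions rather than the authors' implementation; in the deployed system the records would come from IoT devices and the alerts would be pushed to staff in real time.

```python
"""Minimal sketch (not the authors' code) of a responsibility-driven safety loop:
responsibilities are collected, unfulfilled overdue items raise a risk score,
and reminders or department-level alerts are issued automatically.
All names, fields, and thresholds are illustrative assumptions."""
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class Responsibility:
    holder: str          # staff member responsible for the task
    department: str      # department the responsibility belongs to
    description: str
    due: datetime
    fulfilled: bool = False


@dataclass
class Alert:
    level: str           # "reminder" (to staff) or "escalation" (to department)
    target: str
    message: str


def evaluate_risk(items: List[Responsibility], now: datetime) -> float:
    """Toy risk score: share of responsibilities that are unfulfilled and overdue."""
    if not items:
        return 0.0
    overdue = [r for r in items if not r.fulfilled and r.due < now]
    return len(overdue) / len(items)


def generate_alerts(items: List[Responsibility], now: datetime,
                    escalation_threshold: float = 0.3) -> List[Alert]:
    """Send self-reminders for overdue items; escalate if overall risk is high."""
    alerts = [
        Alert("reminder", r.holder, f"Overdue responsibility: {r.description}")
        for r in items if not r.fulfilled and r.due < now
    ]
    risk = evaluate_risk(items, now)
    if risk >= escalation_threshold:
        departments = {r.department for r in items if not r.fulfilled and r.due < now}
        for dept in departments:
            alerts.append(Alert("escalation", dept,
                                f"Department risk score {risk:.2f} exceeds threshold"))
    return alerts


if __name__ == "__main__":
    now = datetime(2020, 12, 6, 9, 0)
    records = [
        Responsibility("alice", "maintenance", "inspect pressure valve",
                       due=now - timedelta(hours=2)),
        Responsibility("bob", "maintenance", "log gas sensor readings",
                       due=now + timedelta(hours=4), fulfilled=True),
        Responsibility("carol", "operations", "file shift handover report",
                       due=now - timedelta(days=1)),
    ]
    for alert in generate_alerts(records, now):
        print(f"[{alert.level}] -> {alert.target}: {alert.message}")
```

The threshold-based escalation is only a stand-in for the paper's big-data risk analysis; the point of the sketch is the split between per-staff self-reminders and department-level alerting that the abstract emphasizes.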
Related papers
- SafeAgent: Safeguarding LLM Agents via an Automated Risk Simulator [77.86600052899156]
Large Language Model (LLM)-based agents are increasingly deployed in real-world applications. We propose AutoSafe, the first framework that systematically enhances agent safety through fully automated synthetic data generation. We show that AutoSafe boosts safety scores by 45% on average and achieves a 28.91% improvement on real-world tasks.
arXiv Detail & Related papers (2025-05-23T10:56:06Z) - Catastrophic Liability: Managing Systemic Risks in Frontier AI Development [0.0]
Frontier AI development poses potential systemic risks that could affect society at a massive scale.
Current practices at many AI labs lack sufficient transparency around safety measures, testing procedures, and governance structures.
We propose a comprehensive approach to safety documentation and accountability in frontier AI development.
arXiv Detail & Related papers (2025-05-01T15:47:14Z) - Safety Aware Task Planning via Large Language Models in Robotics [22.72668275829238]
This paper introduces SAFER (Safety-Aware Framework for Execution in Robotics), a multi-LLM framework designed to embed safety awareness into robotic task planning.
Our framework integrates safety feedback at multiple stages of execution, enabling real-time risk assessment, proactive error correction, and transparent safety evaluation.
arXiv Detail & Related papers (2025-03-19T21:41:10Z) - AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection [47.83354878065321]
We propose AGrail, a lifelong guardrail to enhance agent safety.
AGrail features adaptive safety check generation, effective safety check optimization, and tool compatibility and flexibility.
arXiv Detail & Related papers (2025-02-17T05:12:33Z) - Don't Let Your Robot be Harmful: Responsible Robotic Manipulation [57.70648477564976]
Unthinking execution of human instructions in robotic manipulation can lead to severe safety risks.
We present Safety-as-policy, which includes (i) a world model to automatically generate scenarios containing safety risks and conduct virtual interactions, and (ii) a mental model to infer consequences with reflections.
We show that Safety-as-policy can avoid risks and efficiently complete tasks in both synthetic-dataset and real-world experiments.
arXiv Detail & Related papers (2024-11-27T12:27:50Z) - CRMArena: Understanding the Capacity of LLM Agents to Perform Professional CRM Tasks in Realistic Environments [90.29937153770835]
We introduce CRMArena, a benchmark designed to evaluate AI agents on realistic tasks grounded in professional work environments.
We show that state-of-the-art LLM agents succeed in less than 40% of the tasks with ReAct prompting, and less than 55% even with function-calling abilities.
Our findings highlight the need for enhanced agent capabilities in function-calling and rule-following to be deployed in real-world work environments.
arXiv Detail & Related papers (2024-11-04T17:30:51Z) - Safeguarding AI Agents: Developing and Analyzing Safety Architectures [0.0]
This paper addresses the need for safety measures in AI systems that collaborate with human teams.
We propose and evaluate three frameworks to enhance safety protocols in AI agent systems.
We conclude that these frameworks can significantly strengthen the safety and security of AI agent systems.
arXiv Detail & Related papers (2024-09-03T10:14:51Z) - EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z) - Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z) - The Internet of Responsibilities-Connecting Human Responsibilities using Big Data and Blockchain [5.030698439873751]
We introduce a novel notion, the Internet of responsibilities, for accountability management.
The system detects and collects responsibilities, and represents risk areas in terms of the positions of the responsibility nodes.
An automatic reminder and assignment system is used to enforce strict responsibility control without human intervention.
arXiv Detail & Related papers (2023-12-07T22:16:31Z) - Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z) - Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, showing absolute improvement in safety classification accuracy by 5.9%.
arXiv Detail & Related papers (2022-12-19T17:51:47Z) - Synergistic Redundancy: Towards Verifiable Safety for Autonomous Vehicles [10.277825331268179]
We propose Synergistic Redundancy (SR), a safety architecture for complex cyber-physical systems such as autonomous vehicles (AVs).
SR provides verifiable safety guarantees against specific faults by decoupling the mission and safety tasks of the system.
Close coordination with the mission layer allows easier and early detection of safety critical faults in the system.
arXiv Detail & Related papers (2022-09-04T23:52:03Z) - Sustainability Through Cognition Aware Safety Systems -- Next Level Human-Machine-Interaction [1.847374743273972]
Industrial Safety deals with the physical integrity of humans, machines and the environment when they interact during production scenarios.
The concept of a Cognition Aware Safety System (CASS) is to integrate AI based reasoning about human load, stress, and attention with AI based selection of actions to avoid the triggering of safety stops.
arXiv Detail & Related papers (2021-10-13T19:36:06Z) - Responsibility Management through Responsibility Networks [3.1291878216258064]
We deploy the Internet of Responsibilities (IoR) for responsibility management.
Through the building of the IoR framework, hierarchical responsibility management, automated responsibility evaluation at all levels, and efficient responsibility perception are achieved.
arXiv Detail & Related papers (2021-02-14T21:06:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.