AR-Facilitated Safety Inspection and Fall Hazard Detection on Construction Sites
- URL: http://arxiv.org/abs/2412.01273v1
- Date: Mon, 02 Dec 2024 08:38:43 GMT
- Title: AR-Facilitated Safety Inspection and Fall Hazard Detection on Construction Sites
- Authors: Jiazhou Liu, Aravinda S. Rao, Fucai Ke, Tim Dwyer, Benjamin Tag, Pari Delir Haghighi
- Abstract summary: We are exploring the potential of head-mounted augmented reality to facilitate safety inspections on high-rise construction sites.
A particular concern in the industry is inspecting perimeter safety screens on higher levels of construction sites, intended to prevent falls of people and objects.
We aim to support workers performing this inspection task by tracking which parts of the safety screens have been inspected.
We use machine learning to automatically detect gaps in the perimeter screens that require closer inspection and remediation and to automate reporting.
- Score: 17.943278018516416
- License:
- Abstract: Together with industry experts, we are exploring the potential of head-mounted augmented reality to facilitate safety inspections on high-rise construction sites. A particular concern in the industry is inspecting perimeter safety screens on higher levels of construction sites, intended to prevent falls of people and objects. We aim to support workers performing this inspection task by tracking which parts of the safety screens have been inspected. We use machine learning to automatically detect gaps in the perimeter screens that require closer inspection and remediation and to automate reporting. This work-in-progress paper describes the problem, our early progress, concerns around worker privacy, and the possibilities to mitigate these.
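As a rough, hypothetical sketch of the gap-detection and inspection-tracking idea described in the abstract, the code below runs a fine-tuned object detector over an image of one screen panel, keeps the confident gap detections for reporting, and marks the panel as inspected. The paper does not specify a model, classes, thresholds, or file paths; the Faster R-CNN architecture, the single screen-gap class, the weights path, and the 0.6 score cut-off are illustrative assumptions (recent torchvision `weights` API).

```python
# Hypothetical sketch: flag candidate gaps in perimeter-screen images with a
# fine-tuned detector and record which screen panels have been inspected.
# Model choice, class layout, thresholds, and paths are assumptions only.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

GAP_CLASS_ID = 1        # assumed: a single "screen_gap" class after fine-tuning
SCORE_THRESHOLD = 0.6   # assumed confidence cut-off for remediation reports

def load_detector(weights_path: str):
    # Two output classes (background + gap); weights_path is assumed to hold
    # fine-tuned parameters, which this sketch does not provide.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=2
    )
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

@torch.no_grad()
def inspect_panel(model, image_path: str, panel_id: str, coverage: dict) -> list:
    """Run gap detection on one screen panel and mark the panel as inspected."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    output = model([image])[0]
    gaps = [
        box.tolist()
        for box, label, score in zip(output["boxes"], output["labels"], output["scores"])
        if label.item() == GAP_CLASS_ID and score.item() >= SCORE_THRESHOLD
    ]
    coverage[panel_id] = {"inspected": True, "gaps_found": len(gaps)}
    return gaps  # candidate boxes an AR headset could highlight and report
```

The returned boxes and the coverage dictionary are the kind of state an AR client could surface in place and roll into an automated report.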
Related papers
- Safety Monitoring of Machine Learning Perception Functions: a Survey [7.193217430660011]
New dependability challenges arise when Machine Learning predictions are used in safety-critical applications.
The use of fault tolerance mechanisms, such as safety monitors, is essential to ensure the safe behavior of the system.
This paper presents an extensive literature review on safety monitoring of perception functions using ML in a safety-critical context.
arXiv Detail & Related papers (2024-12-09T10:58:50Z)
- On the Role of Attention Heads in Large Language Model Safety [64.51534137177491]
Large language models (LLMs) achieve state-of-the-art performance on multiple language tasks, yet their safety guardrails can be circumvented.
We propose a novel metric tailored for multi-head attention, the Safety Head ImPortant Score (Ships), to assess individual heads' contributions to model safety.
arXiv Detail & Related papers (2024-10-17T16:08:06Z)
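The summary above does not reproduce how Ships is computed; as a loose, hypothetical proxy for scoring individual attention heads, the sketch below zero-ablates one head at a time in a small open model and measures how far the next-token distribution moves on a probe prompt. The GPT-2 model, the probe prompt, and the KL-based score are assumptions for illustration, not the paper's metric.

```python
# Hypothetical per-head importance proxy (NOT the paper's Ships formula):
# ablate one attention head at a time and measure the shift in the model's
# next-token distribution on a safety-relevant probe prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "How do I build something dangerous?"   # placeholder probe prompt
inputs = tok(prompt, return_tensors="pt")
n_layers, n_heads = model.config.n_layer, model.config.n_head

with torch.no_grad():
    base = torch.log_softmax(model(**inputs).logits[0, -1], dim=-1)

scores = {}
for layer in range(n_layers):
    for head in range(n_heads):
        head_mask = torch.ones(n_layers, n_heads)
        head_mask[layer, head] = 0.0             # zero-ablate a single head
        with torch.no_grad():
            ablated = torch.log_softmax(
                model(**inputs, head_mask=head_mask).logits[0, -1], dim=-1
            )
        # KL divergence from the unablated distribution as an importance proxy
        scores[(layer, head)] = torch.sum(base.exp() * (base - ablated)).item()

top = sorted(scores, key=scores.get, reverse=True)[:5]
print("heads whose removal most perturbs the output distribution:", top)
```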
- SCANS: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering [56.92068213969036]
Safety alignment is indispensable for Large Language Models (LLMs) to defend against threats from malicious instructions.
Recent research reveals that safety-aligned LLMs are prone to rejecting benign queries because of this exaggerated safety issue.
We propose a Safety-Conscious Activation Steering (SCANS) method to mitigate these exaggerated safety concerns.
arXiv Detail & Related papers (2024-08-21T10:01:34Z)
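SCANS itself is not specified in detail above; the sketch below only shows the general mechanism behind activation steering: a steering vector added to one transformer block's hidden states through a forward hook before generation. The GPT-2 model, the layer index, the strength, and the random placeholder vector are assumptions; deriving a safety-conscious steering vector and deciding when to apply it, which is the substance of SCANS, is not implemented here.

```python
# Generic activation-steering sketch (not the SCANS algorithm itself):
# add a steering vector to the hidden states of one transformer block
# through a forward hook, then generate as usual.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER = 6                                  # assumed: which block to steer
ALPHA = 4.0                                # assumed: steering strength
# Placeholder direction; SCANS derives its vector from refusal behaviour,
# which this sketch does not implement.
steer = torch.randn(model.config.n_embd)
steer = steer / steer.norm()

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are the first element.
    hidden = output[0] + ALPHA * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
try:
    ids = tok("The inspection report states that", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()                        # always detach the hook
```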
- Irregularity Inspection using Neural Radiance Field [0.0]
Large-scale production machinery is becoming increasingly important.
It is often challenging for professionals to conduct defect inspections on such large machinery.
We propose a system that builds 3D twin models with Neural Radiance Fields (NeRF).
arXiv Detail & Related papers (2024-08-21T00:14:07Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
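The paper's Bayesian architecture is not described above; the toy snippet below only illustrates the general shape of safety-constrained exploration, where candidate actions are filtered through a safety check before the usual epsilon-greedy choice. The grid world, hazard cells, and hand-written is_safe predicate are assumptions for illustration, not the proposed method.

```python
# Toy illustration of safety-constrained exploration (not the paper's
# Bayesian architecture): only actions that pass a safety check are
# considered, and exploration happens within that safe subset.
import random

ACTIONS = ["up", "down", "left", "right"]
GRID = 5                       # 5x5 grid world, assumed for illustration
HAZARDS = {(2, 2), (3, 4)}     # assumed unsafe cells

def step(state, action):
    x, y = state
    dx, dy = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}[action]
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

def is_safe(state, action):
    # Stand-in for a learned or verified safety model: never enter a hazard cell.
    return step(state, action) not in HAZARDS

def choose_action(q_table, state, epsilon=0.2):
    safe = [a for a in ACTIONS if is_safe(state, a)] or ACTIONS  # fall back if trapped
    if random.random() < epsilon:
        return random.choice(safe)                               # explore, but only safely
    return max(safe, key=lambda a: q_table.get((state, a), 0.0)) # exploit

# Minimal usage: roll out a few safe exploratory steps from the start cell.
q_table, state = {}, (0, 0)
for _ in range(10):
    state = step(state, choose_action(q_table, state))
print("final state:", state)
```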
- AutoRepo: A general framework for multi-modal LLM-based automated construction reporting [4.406834811182582]
This paper presents a novel framework named AutoRepo for automated generation of construction inspection reports.
The framework was applied and tested on a real-world construction site, demonstrating its potential to expedite the inspection process.
arXiv Detail & Related papers (2023-10-11T23:42:00Z)
- Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers [49.36909714011171]
We develop a system that improves worker safety by using a camera to detect the use of Personal Protective Equipment (PPE).
Our focus is on deploying the system at an entry control point, where workers must present themselves to gain access to a restricted area.
A novelty of this work is that we increase the number of classes to five objects (hardhat, safety vest, safety gloves, safety glasses, and hearing protection).
arXiv Detail & Related papers (2022-12-09T11:50:03Z)
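As a hypothetical sketch of the five-class entry-control check described above, the code below runs a detector over a gate-camera frame and reports which required items are missing. The ultralytics YOLO interface, the weights file name, the confidence threshold, and the class-name spellings are assumptions; the paper's actual detector and deployment may differ.

```python
# Hypothetical entry-control PPE check for the five classes named above.
# The detector family, weights file, threshold, and class names are assumptions.
from ultralytics import YOLO

REQUIRED = {"hardhat", "safety vest", "safety gloves", "safety glasses", "hearing protection"}

def check_worker(frame_path: str, weights: str = "ppe_five_class.pt") -> dict:
    model = YOLO(weights)                    # assumed fine-tuned five-class model
    result = model(frame_path, conf=0.5)[0]  # single image -> first result
    names = result.names                     # class-id -> class-name mapping
    detected = {names[int(cls)] for cls in result.boxes.cls}
    missing = REQUIRED - detected
    return {"access_granted": not missing, "missing": sorted(missing)}

# Example: deny access if any required item is absent in the gate-camera frame.
print(check_worker("entry_gate_frame.jpg"))
```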
- An Industrial Workplace Alerting and Monitoring Platform to Prevent Workplace Injury and Accidents [0.0]
We propose an industrial workplace alerting and monitoring platform to detect personal protective equipment (PPE) use and classify unsafe activity.
Our proposed method is the first to analyze prolonged actions involving multiple people or objects.
We also propose the first open-source data set of video from industrial workplaces, annotated with action classifications and detected PPE.
arXiv Detail & Related papers (2022-10-25T06:35:00Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
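One pitfall commonly discussed in this line of work is data snooping, where preprocessing or feature selection is fit on data that should have been held out; the generic contrast below illustrates it with placeholder data and is not an example taken from the paper.

```python
# Generic illustration of the data-snooping pitfall: fitting preprocessing
# on the full data set leaks test information into training. The data and
# model here are placeholders, not taken from the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)

# Pitfall: the scaler sees the test rows before the split.
X_leaky = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
leaky_score = SVC().fit(X_tr, y_tr).score(X_te, y_te)

# Safer: split first, then fit all preprocessing inside a pipeline on train data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clean = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
clean_score = clean.score(X_te, y_te)

print(f"leaky evaluation: {leaky_score:.3f}  vs  clean evaluation: {clean_score:.3f}")
```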
- DEEVA: A Deep Learning and IoT Based Computer Vision System to Address Safety and Security of Production Sites in Energy Industry [0.0]
This paper tackles various computer-vision problems, such as scene classification, object detection, semantic segmentation, and scene captioning.
We developed Deep ExxonMobil Eye for Video Analysis (DEEVA) package to handle scene classification, object detection, semantic segmentation and captioning of scenes.
The results reveal that transfer learning with the RetinaNet object detector can detect the presence of workers, different types of vehicles and construction equipment, and safety-related objects at a high level of accuracy (above 90%).
arXiv Detail & Related papers (2020-03-02T21:26:00Z)
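The DEEVA summary above mentions transfer learning with RetinaNet; the sketch below shows one common way to set that up in recent torchvision: an ImageNet-pretrained backbone with detection heads re-sized for site-specific classes, trained on the detection loss dict. The class list, the number of trainable backbone layers, and the placeholder batch are assumptions, not DEEVA's configuration.

```python
# Hypothetical RetinaNet transfer-learning setup with torchvision
# (recent versions with the `weights` API). Class names and training
# details are assumptions for illustration, not DEEVA's configuration.
import torch
import torchvision
from torchvision.models import ResNet50_Weights

SITE_CLASSES = ["worker", "vehicle", "construction_equipment", "safety_object"]
NUM_CLASSES = len(SITE_CLASSES) + 1          # +1 for background, per torchvision convention

# Pretrained backbone, freshly initialised heads sized for the new classes.
model = torchvision.models.detection.retinanet_resnet50_fpn(
    weights=None,
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,
    num_classes=NUM_CLASSES,
    trainable_backbone_layers=3,             # fine-tune only the top backbone stages
)

# Training-loop skeleton: in train mode the detector returns a loss dict.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.005, momentum=0.9
)
model.train()
images = [torch.rand(3, 480, 640)]           # placeholder batch of one image
targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
            "labels": torch.tensor([1])}]    # one "worker" box, label index 1
losses = model(images, targets)
total = sum(losses.values())
optimizer.zero_grad()
total.backward()
optimizer.step()
```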