Development of a conversing and body temperature scanning autonomously
navigating robot to help screen for COVID-19
- URL: http://arxiv.org/abs/2106.09894v1
- Date: Fri, 18 Jun 2021 03:30:11 GMT
- Authors: Ryan Kim
- Abstract summary: The goal is to develop a functioning solution that performs the tasks below.
An autonomously navigating mobile robot is used with a manipulator controlled by a face-tracking algorithm.
Recommendations will be made for enhancements that could be incorporated when approaching commercialization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Throughout the COVID-19 pandemic, the most common symptom displayed by
patients has been a fever, leading to the use of temperature scanning as a
preemptive measure to detect potential carriers of the virus. Human employees
with handheld thermometers have been used to fulfill this task; however, this
puts them at risk, as they cannot be physically distanced, and the sequential
nature of this method leads to great inconvenience and inefficiency. The
proposed solution is an autonomously navigating robot capable of conversing and
scanning people's temperature to detect fevers and help screen for COVID-19. To
satisfy this objective, the robot must be able to (1) navigate autonomously,
(2) detect and track people, and (3) take individuals' temperature readings and
converse with them if the reading exceeds 38°C. An autonomously navigating mobile
robot is used with a manipulator controlled using a face tracking algorithm,
and an end effector consisting of a thermal camera, smartphone, and chatbot.
The goal is to develop a functioning solution that performs the above tasks. In
addition, technical challenges encountered and their engineering solutions will
be presented, and recommendations will be made for enhancements that could be
incorporated when approaching commercialization.
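The screening logic the abstract describes (read a temperature, compare it against the 38°C threshold, and trigger a conversation on a fever) can be sketched as follows. This is a minimal illustration only: the function and variable names are hypothetical placeholders, not the paper's implementation, and taking the hottest pixel in a face region is just one simple proxy for a forehead reading from a thermal camera.

```python
# Sketch of the fever-screening decision described in the abstract.
# `screen_person`, `thermal_frame`, and `face_box` are hypothetical names;
# the paper's actual system uses an autonomous mobile base, a face-tracking
# manipulator, a thermal camera, a smartphone, and a chatbot.

FEVER_THRESHOLD_C = 38.0  # threshold stated in the abstract


def screen_person(thermal_frame, face_box):
    """Return the measured temperature (°C) and whether it exceeds the threshold."""
    # Sample the hottest pixel inside the detected face region as a
    # simple proxy for the person's body temperature.
    x, y, w, h = face_box
    region = [row[x:x + w] for row in thermal_frame[y:y + h]]
    temp_c = max(max(row) for row in region)
    return temp_c, temp_c > FEVER_THRESHOLD_C


# Example: a 4x4 "thermal frame" of temperatures in degrees Celsius,
# with a face detected in the central 2x2 region.
frame = [
    [36.2, 36.4, 36.1, 36.0],
    [36.5, 38.6, 38.4, 36.2],
    [36.3, 38.5, 38.2, 36.1],
    [36.0, 36.1, 36.2, 36.0],
]
temp, fever = screen_person(frame, (1, 1, 2, 2))
# temp == 38.6 and fever is True, so the robot would start a conversation.
```

In the described system, a positive result would hand off to the smartphone chatbot rather than simply printing a warning.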
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Task Offloading for Smart Glasses in Healthcare: Enhancing Detection of Elevated Body Temperature [3.6525326603691504]
This paper focuses on analyzing task-offloading scenarios for a healthcare monitoring application performed on smart wearable glasses.
The study evaluates performance metrics including task completion time, computing capabilities, and energy consumption under realistic conditions.
The findings highlight the potential benefits of task offloading for wearable devices in healthcare settings.
arXiv Detail & Related papers (2023-08-14T14:57:19Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
arXiv Detail & Related papers (2022-12-07T18:55:53Z)
- People detection and social distancing classification in smart cities for COVID-19 by using thermal images and deep learning algorithms [0.0]
COVID-19 is a disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It was first identified in December 2019 in Wuhan, China.
This research proposes an artificial intelligence system for social distancing classification of persons by using thermal images.
arXiv Detail & Related papers (2022-09-10T16:30:29Z)
- BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning [108.41464483878683]
We study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks.
We develop an interactive and flexible imitation learning system that can learn from both demonstrations and interventions.
When scaling data collection on a real robot to more than 100 distinct tasks, we find that this system can perform 24 unseen manipulation tasks with an average success rate of 44%.
arXiv Detail & Related papers (2022-02-04T07:30:48Z)
- TsFeX: Contact Tracing Model using Time Series Feature Extraction and Gradient Boosting [0.0]
This research presents an automated machine learning system for identifying individuals who may have come in contact with others infected with COVID-19.
This paper describes the different approaches followed in arriving at an optimal model that effectively predicts whether a person has been in close proximity to an infected individual.
arXiv Detail & Related papers (2021-11-29T11:12:38Z)
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
The prompt recognition of movement carefulness from observing the partner's action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- In-Bed Person Monitoring Using Thermal Infrared Sensors [53.561797148529664]
We use 'Griddy', a prototype with a Panasonic Grid-EYE, a low-resolution infrared thermopile array sensor, which offers more privacy.
For this purpose, two datasets were captured, one (480 images) under constant conditions, and a second one (200 images) under different variations.
We test three machine learning algorithms: Support Vector Machines (SVM), k-Nearest Neighbors (k-NN), and a Neural Network (NN).
arXiv Detail & Related papers (2021-07-16T15:59:07Z)
- Using Conditional Generative Adversarial Networks to Reduce the Effects of Latency in Robotic Telesurgery [0.0]
In surgery, any micro-delay can injure a patient severely and in some cases, result in fatality.
Current surgical robots use calibrated sensors to measure the position of the arms and tools.
In this work we present a purely optical approach that provides a measurement of the tool position in relation to the patient's tissues.
arXiv Detail & Related papers (2020-10-07T13:40:44Z)
- Lio -- A Personal Robot Assistant for Human-Robot Interaction and Care Applications [0.35390706902408026]
Lio is a mobile robot platform with a multi-functional arm explicitly designed for human-robot interaction and personal care assistant tasks.
Lio is intrinsically safe by having full coverage in soft artificial-leather material as well as having collision detection, limited speed and forces.
During the COVID-19 pandemic, Lio was rapidly adjusted to perform additional functionality like disinfection and remote elevated body temperature detection.
arXiv Detail & Related papers (2020-06-16T09:37:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.