FaceTouch: Detecting hand-to-face touch with supervised contrastive
learning to assist in tracing infectious disease
- URL: http://arxiv.org/abs/2308.12840v1
- Date: Thu, 24 Aug 2023 14:55:38 GMT
- Title: FaceTouch: Detecting hand-to-face touch with supervised contrastive
learning to assist in tracing infectious disease
- Authors: Mohamed R. Ibrahim and Terry Lyons
- Abstract summary: FaceTouch seeks to detect hand-to-face touches in the wild, such as through video chats, bus footage, or CCTV feeds.
This has been demonstrated to be useful in complex urban scenarios beyond simply identifying hand movement and its closeness to faces.
- Score: 6.164223149261533
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many viruses and diseases spread from one person to another through
the respiratory system. COVID-19 demonstrated how crucial it is to trace and
reduce contacts to stop such spread. There
is a clear gap in finding automatic methods that can detect hand-to-face
contact in complex urban scenes or indoors. In this paper, we introduce a
computer vision framework, called FaceTouch, based on deep learning. It
comprises deep sub-models to detect humans and analyse their actions. FaceTouch
seeks to detect hand-to-face touches in the wild, such as through video chats,
bus footage, or CCTV feeds. Despite partial occlusion of faces, the introduced
system learns to detect face touches from the RGB representation of a given
scene by utilising representations of body gestures such as arm
movement. This has been demonstrated to be useful in complex urban scenarios
beyond simply identifying hand movement and its closeness to faces. Relying on
Supervised Contrastive Learning, the introduced model is trained on our
collected dataset, given the absence of other benchmark datasets. The framework
shows strong validation on unseen datasets, which opens the door for potential
deployment.
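The abstract names Supervised Contrastive Learning as the training objective. As a minimal illustration only (not the authors' implementation), the sketch below computes the standard supervised contrastive loss over a batch of L2-normalised embeddings; the `supcon_loss` name, the two-class touch/no-touch labelling, and the temperature value are illustrative assumptions:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over one batch.

    features: (N, D) array of L2-normalised embeddings.
    labels:   (N,) integer class labels (e.g. 0 = no touch, 1 = face touch).
    Anchors with the same label are pulled together; others are pushed apart.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    n = features.shape[0]
    # Pairwise cosine-similarity logits, scaled by the temperature.
    logits = features @ features.T / temperature
    # Subtract the per-row max for numerical stability (log-prob is invariant).
    logits = logits - logits.max(axis=1, keepdims=True)
    # Exclude each anchor's self-similarity from the denominator.
    self_mask = np.eye(n, dtype=bool)
    exp_logits = np.exp(logits) * ~self_mask
    log_prob = logits - np.log(exp_logits.sum(axis=1, keepdims=True))
    # Positives: same label as the anchor, excluding the anchor itself.
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = pos_mask.sum(axis=1)
    # Mean log-probability over positives per anchor, negated and averaged.
    loss = -(log_prob * pos_mask).sum(axis=1) / np.maximum(pos_counts, 1)
    return loss[pos_counts > 0].mean()
```

A batch where same-class embeddings are already aligned yields a near-zero loss, while a batch where classes are mixed up yields a large one, which is the gradient signal that shapes the embedding space.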
Related papers
- OPENTOUCH: Bringing Full-Hand Touch to Real-World Interaction [93.88239833545623]
We present OpenTouch, the first in-the-wild egocentric full-hand tactile dataset. We show that tactile signals provide a compact yet powerful cue for grasp understanding. We aim to advance multimodal egocentric perception, embodied learning, and contact-rich robotic manipulation.
arXiv Detail & Related papers (2025-12-18T18:18:17Z) - Grasp Like Humans: Learning Generalizable Multi-Fingered Grasping from Human Proprioceptive Sensorimotor Integration [26.351720551267846]
Tactile and kinesthetic perceptions are crucial for human dexterous manipulation, enabling reliable grasping of objects via sensorimotor integration. We propose a novel glove-mediated tactile-kinematic perception-prediction framework for grasp skill transfer from human intuitive and natural operation to robotic execution, based on imitation learning.
arXiv Detail & Related papers (2025-09-10T07:44:12Z) - Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper [7.618517580705364]
We present a portable, lightweight gripper with integrated tactile sensors. We propose a cross-modal representation learning framework that integrates visual and tactile signals. We validate our approach on fine-grained tasks such as test tube insertion and pipette-based fluid transfer.
arXiv Detail & Related papers (2025-07-20T17:53:59Z) - TouchInsight: Uncertainty-aware Rapid Touch and Text Input for Mixed Reality from Egocentric Vision [25.271209425555906]
We present a real-time pipeline that detects touch input from all ten fingers on any physical surface.
Our method TouchInsight comprises a neural network to predict the moment of a touch event, the finger making contact, and the touch location.
arXiv Detail & Related papers (2024-10-08T11:42:44Z) - Neural feels with neural fields: Visuo-tactile perception for in-hand
manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z) - Attention for Robot Touch: Tactile Saliency Prediction for Robust
Sim-to-Real Tactile Control [12.302685367517718]
High-resolution tactile sensing can provide accurate information about local contact in contact-rich robotic tasks.
We study a new concept, tactile saliency for robot touch, inspired by the human touch attention mechanism from neuroscience.
arXiv Detail & Related papers (2023-07-26T21:19:45Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for a lot of dexterous manipulation tasks.
Vision-based tactile sensors are being widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - Face Forgery Detection Based on Facial Region Displacement Trajectory
Series [10.338298543908339]
We develop a method for detecting manipulated videos based on the trajectory of the facial region displacement.
This information is used to construct a network for exposing multidimensional artifacts in the trajectory sequences of manipulated videos.
arXiv Detail & Related papers (2022-12-07T14:47:54Z) - Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual
Imitation Learning [62.83590925557013]
We learn a set of challenging partially-observed manipulation tasks from visual and audio inputs.
Our proposed system learns these tasks by combining offline imitation learning from tele-operated demonstrations and online finetuning.
In a set of simulated tasks, we find that our system benefits from using audio, and that by using online interventions we are able to improve the success rate of offline imitation learning by 20%.
arXiv Detail & Related papers (2022-05-30T04:52:58Z) - Towards Predicting Fine Finger Motions from Ultrasound Images via
Kinematic Representation [12.49914980193329]
We study the inference problem of identifying the activation of specific fingers from a sequence of US images.
We consider this task as an important step towards higher adoption rates of robotic prostheses among arm amputees.
arXiv Detail & Related papers (2022-02-10T18:05:09Z) - An Ensemble Model for Face Liveness Detection [2.322052136673525]
We present a passive method to detect face presentation attack using an ensemble deep learning technique.
We propose an ensemble method where multiple features of the face and background regions are learned to predict whether the user is a bonafide or an attacker.
arXiv Detail & Related papers (2022-01-19T12:43:39Z) - A Survey on Masked Facial Detection Methods and Datasets for Fighting
Against COVID-19 [64.88701052813462]
Coronavirus disease 2019 (COVID-19) continues to pose a great challenge to the world since its outbreak.
To fight against the disease, a series of artificial intelligence (AI) techniques are developed and applied to real-world scenarios.
In this paper, we primarily focus on the AI techniques of masked facial detection and related datasets.
arXiv Detail & Related papers (2022-01-13T03:28:20Z) - A Computer Vision System to Help Prevent the Transmission of COVID-19 [79.62140902232628]
The COVID-19 pandemic affects every area of daily life globally.
Health organizations advise social distancing, wearing face mask, and avoiding touching face.
We developed a deep learning-based computer vision system to help prevent the transmission of COVID-19.
arXiv Detail & Related papers (2021-03-16T00:00:04Z) - Physics-Based Dexterous Manipulations with Estimated Hand Poses and
Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.