Recognition of Smoking Gesture Using Smart Watch Technology
- URL: http://arxiv.org/abs/2003.02735v1
- Date: Thu, 5 Mar 2020 16:05:49 GMT
- Title: Recognition of Smoking Gesture Using Smart Watch Technology
- Authors: Casey A. Cole, Bethany Janos, Dien Anshari, James F. Thrasher, Scott
Strayer, and Homayoun Valafar
- Abstract summary: Early identification of smoking gestures can help to initiate the appropriate intervention method and prevent relapses in smoking.
Our experiments indicate 85%-95% success rates in identifying the smoking gesture.
We have demonstrated the possibility of using smart watches to perform continuous monitoring of daily activities.
- Score: 0.18472148461613155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diseases resulting from prolonged smoking are the most common preventable
causes of death in the world today. In this report we investigate the success
of utilizing accelerometer sensors in smart watches to identify smoking
gestures. Early identification of smoking gestures can help to initiate the
appropriate intervention method and prevent relapses in smoking. Our
experiments indicate 85%-95% success rates in identifying the smoking gesture
among other, similar gestures using Artificial Neural Networks (ANNs). Our
investigations concluded that information obtained from the x-dimension of the
accelerometer is the best means of identifying the smoking gesture, while the y
and z dimensions are helpful in eliminating other gestures such as eating,
drinking, and scratching the nose. We utilized sensor data from the Apple Watch
during the training of the ANN. Using sensor data from another participant
collected on Pebble Steel, we obtained a smoking identification accuracy of
greater than 90% when using an ANN trained on data previously collected from
the Apple Watch. Finally, we have demonstrated the possibility of using smart
watches to perform continuous monitoring of daily activities.
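The classification pipeline the abstract describes (windowed accelerometer readings fed to an ANN) can be sketched as follows. This is a minimal, hypothetical illustration: the 2-second window, the per-axis mean/std features, and the network sizes are assumptions for illustration, not the authors' actual configuration or trained weights.

```python
import numpy as np

def extract_features(window):
    """window: (n_samples, 3) array of x/y/z accelerometer readings.
    Returns per-axis mean and standard deviation (6 features); the paper
    suggests the x-axis carries most of the smoking-gesture signal."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def ann_forward(features, w1, b1, w2, b2):
    """Tiny feed-forward ANN: one tanh hidden layer, softmax over two
    classes (smoking vs. other gesture)."""
    h = np.tanh(features @ w1 + b1)
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Random (untrained) weights, purely to show the shapes involved.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(6, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

window = rng.normal(size=(50, 3))  # 50 synthetic samples of (x, y, z)
probs = ann_forward(extract_features(window), w1, b1, w2, b2)
print(probs)  # two class probabilities summing to 1
```

In a real system the weights would be fit on labeled gesture windows; the point here is only the window-to-features-to-ANN data flow.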
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z) - Information We Can Extract About a User From 'One Minute Mobile
Application Usage' [0.0]
In this paper, we extracted different human activities using accelerometer, magnetometer, and gyroscope sensors of android smartphones.
Using different social media applications, such as Facebook, Instagram, WhatsApp, and Twitter, we extracted the raw sensor values along with the attributes of 29 subjects.
We extract features from the raw signals and use them to perform classification using different machine learning (ML) algorithms.
arXiv Detail & Related papers (2022-07-27T00:23:11Z) - Human Activity Recognition on Time Series Accelerometer Sensor Data
using LSTM Recurrent Neural Networks [0.2294014185517203]
In this study, we focus on the use of smartwatch accelerometer sensors to recognize eating activity.
We collected sensor data from 10 participants while consuming pizza.
We developed an LSTM-ANN architecture that demonstrated 90% success in identifying individual bites as compared to puff, medication-taking, or jogging activities.
arXiv Detail & Related papers (2022-06-03T19:24:20Z) - Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z) - HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly
Unlabeled Mobile Sensor Data [61.79595926825511]
Acquiring balanced datasets with accurate activity labels requires human annotators to label activities correctly in real time, potentially interfering with the subjects' normal activities.
We propose HAR-GCNN, a deep graph CNN model that leverages the correlation between chronologically adjacent sensor measurements to predict the correct labels for unclassified activities.
HAR-GCNN shows superior performance relative to previously used baseline methods, improving classification accuracy by about 25% and up to 68% on different datasets.
arXiv Detail & Related papers (2022-03-07T01:23:46Z) - CovidAlert -- A Wristwatch-based System to Alert Users from Face
Touching [1.9502559508200459]
Face touching is a compulsive human behavior that cannot be prevented without continuous effort.
We have designed a smartwatch-based solution, CovidAlert, that detects hand transition to face and sends a quick haptic alert to the users.
The overall accuracy of our system is 88.4% with low false negatives and false positives.
arXiv Detail & Related papers (2021-11-30T21:58:50Z) - A Bottom-up method Towards the Automatic and Objective Monitoring of
Smoking Behavior In-the-wild using Wrist-mounted Inertial Sensors [6.955421797534318]
Tobacco consumption has reached global epidemic proportions and is characterized as the leading cause of death and illness.
We present a two-step, bottom-up algorithm towards the automatic and objective monitoring of cigarette-based, smoking behavior during the day.
In the first step, our algorithm performs the detection of individual smoking gestures (i.e., puffs) using an artificial neural network with both convolutional and recurrent layers.
In the second step, we make use of the detected puff density to achieve the temporal localization of smoking sessions that occur throughout the day.
arXiv Detail & Related papers (2021-09-08T07:50:47Z) - Semantic-guided Pixel Sampling for Cloth-Changing Person
Re-identification [80.70419698308064]
This paper proposes a semantic-guided pixel sampling approach for the cloth-changing person re-ID task.
We first recognize the pedestrian's upper clothes and pants, then randomly change them by sampling pixels from other pedestrians.
Our method achieved 65.8% Rank-1 accuracy, outperforming previous methods by a large margin.
arXiv Detail & Related papers (2021-07-24T03:41:00Z) - Project Achoo: A Practical Model and Application for COVID-19 Detection
from Recordings of Breath, Voice, and Cough [55.45063681652457]
We propose a machine learning method to quickly triage COVID-19 using recordings made on consumer devices.
The approach combines signal processing methods with fine-tuned deep learning networks and provides methods for signal denoising, cough detection and classification.
We have also developed and deployed a mobile application that uses a symptom checker together with voice, breath, and cough signals to detect COVID-19 infection.
arXiv Detail & Related papers (2021-07-12T08:07:56Z) - STCNet: Spatio-Temporal Cross Network for Industrial Smoke Detection [52.648906951532155]
We propose a novel Spatio-Temporal Cross Network (STCNet) to recognize industrial smoke emissions.
The proposed STCNet involves a spatial pathway to extract texture features and a temporal pathway to capture smoke motion information.
We show that our STCNet outperforms the best competitors by 6.2% on the challenging RISE industrial smoke detection dataset.
arXiv Detail & Related papers (2020-11-10T02:28:47Z) - State Transition Modeling of the Smoking Behavior using LSTM Recurrent
Neural Networks [0.2294014185517203]
In this study, we focus on the use of smartwatch sensors to recognize smoking activity.
Our reformulation of the smoking gesture as a state-transition model has improved detection rates to nearly 100%.
In addition, we have begun using Long Short-Term Memory (LSTM) neural networks to allow in-context detection of gestures with accuracy nearing 97%.
arXiv Detail & Related papers (2020-01-07T15:06:28Z)
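A recurring post-processing step in the gesture-detection papers above, most explicitly the two-step bottom-up puff-detection work, is aggregating individual gesture detections into session-level output via detection density. A minimal sketch of such density-based temporal localization; the one-second sampling grid, 60-second window, and 3-puff threshold are illustrative assumptions, not values from any of the listed papers.

```python
def localize_sessions(puffs, window=60, threshold=3):
    """puffs: list of 0/1 flags, one per second (1 = puff detected).
    Marks a smoking session wherever the puff count in a trailing
    sliding window reaches the threshold; returns (start, end) pairs
    of second indices."""
    sessions, start = [], None
    for t in range(len(puffs)):
        lo = max(0, t - window + 1)
        dense = sum(puffs[lo:t + 1]) >= threshold
        if dense and start is None:
            start = lo                      # session opens at window start
        elif not dense and start is not None:
            sessions.append((start, t))     # density dropped: close session
            start = None
    if start is not None:
        sessions.append((start, len(puffs)))
    return sessions

# Three puffs close together form one session; an isolated puff does not.
flags = [0] * 300
for t in (100, 120, 140, 280):
    flags[t] = 1
print(localize_sessions(flags))
```

A production system would smooth the density estimate and merge near-adjacent sessions, but the thresholded sliding-window count captures the core idea of turning puff-level detections into day-level smoking sessions.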
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.