Synheart Emotion: Privacy-Preserving On-Device Emotion Recognition from Biosignals
- URL: http://arxiv.org/abs/2511.06231v1
- Date: Sun, 09 Nov 2025 05:15:04 GMT
- Title: Synheart Emotion: Privacy-Preserving On-Device Emotion Recognition from Biosignals
- Authors: Henok Ademtew, Israel Goytom
- Abstract summary: Most emotion recognition systems rely on cloud-based inference, introducing privacy vulnerabilities and latency constraints unsuitable for real-time applications. This work presents a comprehensive evaluation of machine learning architectures for on-device emotion recognition from wrist-based photoplethysmography. We deploy the wrist-only ExtraTrees model optimized via ONNX conversion, achieving a 4.08 MB footprint, 0.05 ms inference latency, and 152x speedup over the original implementation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-computer interaction increasingly demands systems that recognize not only explicit user inputs but also implicit emotional states. While substantial progress has been made in affective computing, most emotion recognition systems rely on cloud-based inference, introducing privacy vulnerabilities and latency constraints unsuitable for real-time applications. This work presents a comprehensive evaluation of machine learning architectures for on-device emotion recognition from wrist-based photoplethysmography (PPG), systematically comparing different models spanning classical ensemble methods, deep neural networks, and transformers on the WESAD stress detection dataset. Results demonstrate that classical ensemble methods substantially outperform deep learning on small physiological datasets, with ExtraTrees achieving F1 = 0.826 on combined features and F1 = 0.623 on wrist-only features, compared to transformers achieving only F1 = 0.509-0.577. We deploy the wrist-only ExtraTrees model optimized via ONNX conversion, achieving a 4.08 MB footprint, 0.05 ms inference latency, and 152x speedup over the original implementation. Furthermore, ONNX optimization yields a 30.5% average storage reduction and 40.1x inference speedup, highlighting the feasibility of privacy-preserving on-device emotion recognition for real-world wearables.
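As a rough illustration of the kind of pipeline the abstract describes (not the authors' code), the sketch below trains an ExtraTrees classifier on synthetic stand-in feature vectors and measures single-sample inference latency; the data, feature count, and labels are placeholders rather than WESAD features, and the ONNX export step is noted only in comments.

```python
import time

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder stand-in for wrist-only PPG-derived features (e.g. heart-rate
# statistics); WESAD itself provides chest and wrist biosignals.
X = rng.normal(size=(600, 12))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic stress label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te))

# Per-sample latency of the plain scikit-learn model; in the paper the model
# is additionally converted to ONNX (e.g. via skl2onnx) to shrink the
# footprint and speed up runtime inference on device.
start = time.perf_counter()
clf.predict(X_te[:1])
latency_ms = (time.perf_counter() - start) * 1000

print(f"F1 = {f1:.3f}, single-sample latency = {latency_ms:.3f} ms")
```

On the synthetic data the classifier separates the classes easily; the paper's reported F1 scores refer to the much harder WESAD features.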
Related papers
- A Cloud-Based Cross-Modal Transformer for Emotion Recognition and Adaptive Human-Computer Interaction [4.6927139685668315]
We propose a Cloud-Based Cross-Modal Transformer (CMT) framework for multimodal emotion recognition and adaptive human-computer interaction. The model integrates visual, auditory, and textual signals using pretrained encoders. The system enables scalable, low-latency emotion recognition for large-scale user interactions.
arXiv Detail & Related papers (2025-11-21T17:29:16Z)
- Neural-Driven Image Editing [51.11173675034121]
Traditional image editing relies on manual prompting, making it labor-intensive and inaccessible to individuals with limited motor control or language abilities. We propose LoongX, a hands-free image editing approach driven by neurophysiological signals. LoongX utilizes state-of-the-art diffusion models trained on a comprehensive dataset of 23,928 image editing pairs.
arXiv Detail & Related papers (2025-07-07T18:31:50Z)
- Emotion Detection on User Front-Facing App Interfaces for Enhanced Schedule Optimization: A Machine Learning Approach [0.0]
We present and evaluate two complementary approaches to emotion detection: a biometric-based method utilizing heart rate (HR) data extracted from electrocardiogram (ECG) signals to predict the emotional dimensions of Valence, Arousal, and Dominance; and a behavioral method analyzing computer activity through multiple machine learning models to classify emotions based on fine-grained user interactions such as mouse movements, clicks, and keystroke patterns. Our comparative analysis on real-world datasets reveals that while both approaches demonstrate effectiveness, the computer-activity-based method delivers superior consistency and accuracy, particularly for mouse-related interactions, which achieved approximately
arXiv Detail & Related papers (2025-06-24T03:21:46Z)
- Neural networks for the prediction of peel force for skin adhesive interface using FEM simulation [0.5731930593343312]
We present a neural network-based approach to predict the minimum peel force required for adhesive detachment from skin tissue. Our model achieved high accuracy, validated through rigorous 5-fold cross-validation. This work introduces a reliable, computationally efficient method for predicting adhesive behaviour, significantly reducing simulation time while maintaining accuracy.
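The 5-fold cross-validation protocol mentioned above can be sketched generically; the data here is synthetic stand-in input (the real model is trained on FEM simulation outputs), and the network size is an assumption, not the authors' architecture.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-ins for FEM simulation inputs (e.g. adhesive stiffness,
# peel angle) and the target minimum peel force.
X = rng.uniform(size=(200, 4))
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.05 * rng.normal(size=200)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    # Small MLP regressor; one fresh model per fold so folds stay independent.
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1)
    model.fit(X[train_idx], y[train_idx])
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))

print(f"5-fold R^2 scores: mean = {np.mean(scores):.3f}")
```

Averaging a metric over the five held-out folds, as here, is what gives the cross-validated accuracy figure such papers report.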
arXiv Detail & Related papers (2025-06-09T12:22:00Z)
- Interpretable Multi-Task PINN for Emotion Recognition and EDA Prediction [0.0]
This study presents a novel Multi-Task Physics-Informed Neural Network (PINN) that performs Electrodermal Activity (EDA) prediction and emotion classification simultaneously. The model integrates psychological self-report features (PANAS and SAM) with a physics-inspired differential equation representing EDA dynamics. The architecture supports dual outputs for both tasks and is trained under a unified multi-task framework.
arXiv Detail & Related papers (2025-05-14T03:13:51Z)
- Synthetic Data Generation of Body Motion Data by Neural Gas Network for Emotion Recognition [0.9790236766474201]
This research introduces a novel application of the Neural Gas Network (NGN) algorithm for synthesizing body motion data. By learning skeletal structure topology, the NGN fits its neurons, or gas particles, onto the body joints. By stitching the generated body postures together over frames, the final synthetic body motion is produced.
arXiv Detail & Related papers (2025-03-11T13:16:30Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- ETLP: Event-based Three-factor Local Plasticity for online learning with neuromorphic hardware [105.54048699217668]
We show that Event-Based Three-factor Local Plasticity (ETLP) achieves competitive accuracy with a clear advantage in computational complexity.
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
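As a minimal, generic sketch of the threshold-adaptation mechanism mentioned above (not the ETLP implementation, and with made-up time constants), a leaky integrate-and-fire neuron can raise its effective firing threshold after each spike and let it decay back over time:

```python
def lif_adaptive(inputs, tau_v=20.0, tau_b=100.0, v_th=1.0, beta=0.5):
    """Leaky integrate-and-fire neuron with an adaptive threshold.

    Each spike increments an adaptation variable b, which raises the
    effective threshold v_th + beta * b and then decays with tau_b.
    A generic sketch only; constants are illustrative assumptions.
    """
    v, b = 0.0, 0.0
    spikes = []
    for x in inputs:
        v += (-v + x) / tau_v               # leaky membrane integration
        s = 1 if v > v_th + beta * b else 0  # spike if above adaptive threshold
        if s:
            v = 0.0                          # reset membrane potential
            b += 1.0                         # raise adaptation variable
        b -= b / tau_b                       # adaptation decays over time
        spikes.append(s)
    return spikes

# Constant drive: early spikes come quickly, then adaptation spaces them out.
spikes = lif_adaptive([2.0] * 200)
print(sum(spikes), "spikes over 200 steps")
```

The growing inter-spike interval under constant input is the signature of threshold adaptation, which is one reason such neurons can encode temporal structure that a fixed-threshold neuron cannot.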
arXiv Detail & Related papers (2023-01-19T19:45:42Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions is still computationally and energy intensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%; however, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.